Test Report: Docker_Linux_containerd 21594

532dacb4acf31553658ff6b0bf62fcf9309f2277:2025-09-19:41507

Tests failed (14/329)

TestMultiControlPlane/serial/DeployApp (727.71s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- rollout status deployment/busybox
E0919 22:27:11.705454   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:25.077360   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:25.083919   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:25.095404   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:25.116872   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:25.158293   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:25.239850   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:25.401535   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:25.723258   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:26.365324   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:27.647387   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:30.209035   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:35.331291   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:39.414736   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:45.572748   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:06.054322   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:47.016195   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:30:08.941452   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:32:11.705971   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:32:25.079362   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:32:52.783294   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 kubectl -- rollout status deployment/busybox: exit status 1 (10m4.235325594s)

-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 out of 3 new replicas have been updated...
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 0 of 8 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 2 of 3 updated replicas are available...

-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 22:35:23.143141   18210 retry.go:31] will retry after 622.646629ms: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 22:35:23.895894   18210 retry.go:31] will retry after 1.274079667s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 22:35:25.294863   18210 retry.go:31] will retry after 2.357002104s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 22:35:27.774780   18210 retry.go:31] will retry after 3.365068968s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 22:35:31.263515   18210 retry.go:31] will retry after 5.283067733s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 22:35:36.667794   18210 retry.go:31] will retry after 10.062930097s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 22:35:46.857777   18210 retry.go:31] will retry after 7.223020536s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 22:35:54.200871   18210 retry.go:31] will retry after 18.199948632s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 22:36:12.521583   18210 retry.go:31] will retry after 15.567553254s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0919 22:36:28.220548   18210 retry.go:31] will retry after 53.864648201s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0919 22:37:11.705879   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- nslookup kubernetes.io: exit status 1 (142.860919ms)

** stderr ** 
	error: Internal error occurred: unable to upgrade connection: container not found ("busybox")

** /stderr **
ha_test.go:173: Pod busybox-7b57f96db7-jdczt could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- nslookup kubernetes.default: exit status 1 (134.43079ms)

** stderr ** 
	error: Internal error occurred: unable to upgrade connection: container not found ("busybox")

** /stderr **
ha_test.go:183: Pod busybox-7b57f96db7-jdczt could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (134.80458ms)

** stderr ** 
	error: Internal error occurred: unable to upgrade connection: container not found ("busybox")

** /stderr **
ha_test.go:191: Pod busybox-7b57f96db7-jdczt could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- nslookup kubernetes.default.svc.cluster.local
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-326307
helpers_test.go:243: (dbg) docker inspect ha-326307:

-- stdout --
	[
	    {
	        "Id": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	        "Created": "2025-09-19T22:23:18.619000062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 69921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:23:18.670514121Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hosts",
	        "LogPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3-json.log",
	        "Name": "/ha-326307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-326307:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-326307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	                "LowerDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-326307",
	                "Source": "/var/lib/docker/volumes/ha-326307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-326307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-326307",
	                "name.minikube.sigs.k8s.io": "ha-326307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b9c61cd0152986e2b265b3cf0a7628b1c049e495ce30493b8e54f6b9446115f",
	            "SandboxKey": "/var/run/docker/netns/8b9c61cd0152",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-326307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:80:09:d2:65:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "465af21e2d8d112a34f14f7c3ab89eac4e6e57f582ff8e32b514381f55dd085e",
	                    "EndpointID": "f35735061c65841c2c1ba7f2859db25885582588fa8f2d14e3a015320f6c3fc4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-326307",
	                        "5e0f1fe86b08"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-326307 -n ha-326307
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 logs -n 25
E0919 22:37:25.077815   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 logs -n 25: (1.330116508s)
helpers_test.go:260: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p functional-541880                                                                                                  │ functional-541880 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ start   │ ha-326307 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:25 UTC │
	│ kubectl │ ha-326307 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                      │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:25 UTC │ 19 Sep 25 22:25 UTC │
	│ kubectl │ ha-326307 kubectl -- rollout status deployment/busybox                                                                │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:25 UTC │                     │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                  │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                 │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- nslookup kubernetes.io                                          │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- nslookup kubernetes.io                                          │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- nslookup kubernetes.io                                          │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- nslookup kubernetes.default                                     │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- nslookup kubernetes.default                                     │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- nslookup kubernetes.default                                     │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- nslookup kubernetes.default.svc.cluster.local                   │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- nslookup kubernetes.default.svc.cluster.local                   │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- nslookup kubernetes.default.svc.cluster.local                   │ ha-326307         │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:23:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:23:13.527478   69358 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:23:13.527574   69358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:13.527579   69358 out.go:374] Setting ErrFile to fd 2...
	I0919 22:23:13.527586   69358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:13.527823   69358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:23:13.528355   69358 out.go:368] Setting JSON to false
	I0919 22:23:13.529260   69358 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3938,"bootTime":1758316656,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:23:13.529345   69358 start.go:140] virtualization: kvm guest
	I0919 22:23:13.531661   69358 out.go:179] * [ha-326307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:23:13.533198   69358 notify.go:220] Checking for updates...
	I0919 22:23:13.533231   69358 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:23:13.534827   69358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:23:13.536340   69358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:23:13.537773   69358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:23:13.539372   69358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:23:13.541189   69358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:23:13.542697   69358 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:23:13.568228   69358 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:23:13.568380   69358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:13.622546   69358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:23:13.612893654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:13.622646   69358 docker.go:318] overlay module found
	I0919 22:23:13.624668   69358 out.go:179] * Using the docker driver based on user configuration
	I0919 22:23:13.626116   69358 start.go:304] selected driver: docker
	I0919 22:23:13.626134   69358 start.go:918] validating driver "docker" against <nil>
	I0919 22:23:13.626147   69358 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:23:13.626725   69358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:13.684385   69358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:23:13.672811393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:13.684569   69358 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:23:13.684775   69358 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:23:13.686618   69358 out.go:179] * Using Docker driver with root privileges
	I0919 22:23:13.687924   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:13.688000   69358 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:23:13.688014   69358 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:23:13.688089   69358 start.go:348] cluster config:
	{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m
0s}
	I0919 22:23:13.689601   69358 out.go:179] * Starting "ha-326307" primary control-plane node in "ha-326307" cluster
	I0919 22:23:13.691305   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:23:13.692823   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:23:13.694304   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:13.694378   69358 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 22:23:13.694398   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:23:13.694426   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:23:13.694515   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:23:13.694533   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:23:13.694981   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:13.695014   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json: {Name:mk9e3af266bcfbabd18624d7d22535c6f1841e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:13.716737   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:23:13.716759   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:23:13.716776   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:23:13.716797   69358 start.go:360] acquireMachinesLock for ha-326307: {Name:mk42b79b90944aab63c8b37c2f94e04ca1ebec1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:23:13.716893   69358 start.go:364] duration metric: took 80.537µs to acquireMachinesLock for "ha-326307"
	I0919 22:23:13.716915   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:13.716974   69358 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:23:13.719062   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:23:13.719317   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:23:13.719352   69358 client.go:168] LocalClient.Create starting
	I0919 22:23:13.719447   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:23:13.719502   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:13.719517   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:13.719580   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:23:13.719600   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:13.719610   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:13.719933   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:23:13.737609   69358 cli_runner.go:211] docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:23:13.737699   69358 network_create.go:284] running [docker network inspect ha-326307] to gather additional debugging logs...
	I0919 22:23:13.737725   69358 cli_runner.go:164] Run: docker network inspect ha-326307
	W0919 22:23:13.755400   69358 cli_runner.go:211] docker network inspect ha-326307 returned with exit code 1
	I0919 22:23:13.755437   69358 network_create.go:287] error running [docker network inspect ha-326307]: docker network inspect ha-326307: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-326307 not found
	I0919 22:23:13.755455   69358 network_create.go:289] output of [docker network inspect ha-326307]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-326307 not found
	
	** /stderr **
	I0919 22:23:13.755563   69358 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:13.774541   69358 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018eb270}
	I0919 22:23:13.774578   69358 network_create.go:124] attempt to create docker network ha-326307 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:23:13.774619   69358 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-326307 ha-326307
	I0919 22:23:13.834699   69358 network_create.go:108] docker network ha-326307 192.168.49.0/24 created
	I0919 22:23:13.834730   69358 kic.go:121] calculated static IP "192.168.49.2" for the "ha-326307" container
	I0919 22:23:13.834799   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:23:13.852316   69358 cli_runner.go:164] Run: docker volume create ha-326307 --label name.minikube.sigs.k8s.io=ha-326307 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:23:13.872969   69358 oci.go:103] Successfully created a docker volume ha-326307
	I0919 22:23:13.873115   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307 --entrypoint /usr/bin/test -v ha-326307:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:23:14.277718   69358 oci.go:107] Successfully prepared a docker volume ha-326307
	I0919 22:23:14.277762   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:14.277789   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:23:14.277852   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:23:18.547851   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.269954037s)
	I0919 22:23:18.547886   69358 kic.go:203] duration metric: took 4.270092787s to extract preloaded images to volume ...
	W0919 22:23:18.548002   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:23:18.548044   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:23:18.548091   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:23:18.602395   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307 --name ha-326307 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307 --network ha-326307 --ip 192.168.49.2 --volume ha-326307:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:23:18.902433   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Running}}
	I0919 22:23:18.923488   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:18.945324   69358 cli_runner.go:164] Run: docker exec ha-326307 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:23:18.998198   69358 oci.go:144] the created container "ha-326307" has a running status.
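For readers following the "Run: docker ..." lines above: minikube shells out to the docker CLI for each of these steps and logs the exact invocation. A minimal, illustrative Go sketch of that pattern (not minikube's actual cli_runner code; only the container name "ha-326307" is taken from the log, the rest is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Roughly equivalent to the logged command:
	//   docker container inspect ha-326307 --format={{.State.Status}}
	out, err := exec.Command("docker", "container", "inspect", "ha-326307",
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		fmt.Printf("docker inspect failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("container status: %s", out)
}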
	I0919 22:23:18.998254   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa...
	I0919 22:23:19.305578   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:23:19.305639   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:23:19.338987   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:19.361057   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:23:19.361077   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:23:19.423644   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:19.446710   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:23:19.446815   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.468914   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.469178   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.469194   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:23:19.609654   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:23:19.609685   69358 ubuntu.go:182] provisioning hostname "ha-326307"
	I0919 22:23:19.609806   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.631352   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.631769   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.631790   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307 && echo "ha-326307" | sudo tee /etc/hostname
	I0919 22:23:19.783770   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:23:19.783868   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.802757   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.802967   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.802990   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:23:19.942778   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:23:19.942811   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:23:19.942925   69358 ubuntu.go:190] setting up certificates
	I0919 22:23:19.942949   69358 provision.go:84] configureAuth start
	I0919 22:23:19.943010   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:19.963444   69358 provision.go:143] copyHostCerts
	I0919 22:23:19.963491   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:19.963531   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:23:19.963541   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:19.963629   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:23:19.963778   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:19.963807   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:23:19.963811   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:19.963862   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:23:19.963997   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:19.964030   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:23:19.964040   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:19.964080   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:23:19.964187   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307 san=[127.0.0.1 192.168.49.2 ha-326307 localhost minikube]
	I0919 22:23:20.747311   69358 provision.go:177] copyRemoteCerts
	I0919 22:23:20.747377   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:23:20.747410   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:20.766468   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:20.866991   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:23:20.867057   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:23:20.897799   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:23:20.897858   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:23:20.925953   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:23:20.926026   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:23:20.954845   69358 provision.go:87] duration metric: took 1.011880735s to configureAuth
	I0919 22:23:20.954872   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:23:20.955074   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:20.955089   69358 machine.go:96] duration metric: took 1.508356629s to provisionDockerMachine
	I0919 22:23:20.955096   69358 client.go:171] duration metric: took 7.235738314s to LocalClient.Create
	I0919 22:23:20.955122   69358 start.go:167] duration metric: took 7.235806728s to libmachine.API.Create "ha-326307"
	I0919 22:23:20.955128   69358 start.go:293] postStartSetup for "ha-326307" (driver="docker")
	I0919 22:23:20.955136   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:23:20.955224   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:23:20.955259   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:20.975767   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.077921   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:23:21.081820   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:23:21.081872   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:23:21.081881   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:23:21.081888   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:23:21.081901   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:23:21.081973   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:23:21.082057   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:23:21.082071   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:23:21.082204   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:23:21.092245   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:21.123732   69358 start.go:296] duration metric: took 168.590139ms for postStartSetup
	I0919 22:23:21.124127   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:21.143109   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:21.143414   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:23:21.143466   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.162970   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.258062   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:23:21.263437   69358 start.go:128] duration metric: took 7.546444684s to createHost
	I0919 22:23:21.263491   69358 start.go:83] releasing machines lock for "ha-326307", held for 7.546570423s
	I0919 22:23:21.263561   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:21.282251   69358 ssh_runner.go:195] Run: cat /version.json
	I0919 22:23:21.282309   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.282391   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:23:21.282539   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.302076   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.302858   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.477003   69358 ssh_runner.go:195] Run: systemctl --version
	I0919 22:23:21.481946   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:23:21.486736   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:23:21.519470   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:23:21.519573   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:23:21.549703   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:23:21.549736   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:23:21.549772   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:23:21.549813   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:23:21.563897   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:23:21.577043   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:23:21.577104   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:23:21.591898   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:23:21.607905   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:23:21.677531   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:23:21.749223   69358 docker.go:234] disabling docker service ...
	I0919 22:23:21.749348   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:23:21.771648   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:23:21.786268   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:23:21.864247   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:23:21.930620   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:23:21.943680   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:23:21.963319   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:23:21.977473   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:23:21.989630   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:23:21.989705   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:23:22.001778   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:22.013415   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:23:22.024683   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:22.036042   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:23:22.047238   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:23:22.060239   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:23:22.074324   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:23:22.087081   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:23:22.099883   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:23:22.110348   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:22.180253   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
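The sed invocations above switch containerd's cgroup driver to systemd before the daemon is restarted. As a hedged illustration only, the same substitution expressed in Go (assuming a hypothetical local copy of the config file named config.toml; this is not how minikube itself performs the edit):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "config.toml" // hypothetical local copy of /etc/containerd/config.toml
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	if err := os.WriteFile(path, patched, 0o644); err != nil {
		fmt.Println("write:", err)
	}
}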
	I0919 22:23:22.295748   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:23:22.295832   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:23:22.300535   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:23:22.300597   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:23:22.304676   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:23:22.344790   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:23:22.344850   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:22.371338   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:22.400934   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:23:22.402669   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:22.421952   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:23:22.426523   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:22.442415   69358 kubeadm.go:875] updating cluster {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:23:22.442712   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:22.442823   69358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:23:22.482684   69358 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:23:22.482710   69358 containerd.go:534] Images already preloaded, skipping extraction
	I0919 22:23:22.482762   69358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:23:22.518500   69358 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:23:22.518526   69358 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:23:22.518533   69358 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0919 22:23:22.518616   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:23:22.518668   69358 ssh_runner.go:195] Run: sudo crictl info
	I0919 22:23:22.554956   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:22.554993   69358 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:23:22.555004   69358 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:23:22.555029   69358 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326307 NodeName:ha-326307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:23:22.555176   69358 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-326307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
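The kubeadm config above is rendered from the cluster parameters listed a few lines earlier (node IP, API server port, CRI socket, node name). Purely as an illustration, a fragment of such a rendering can be expressed with Go's text/template; the template text and struct below are assumptions for this sketch, not minikube's actual generator:

package main

import (
	"os"
	"text/template"
)

const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	params := struct {
		NodeIP        string
		APIServerPort int
		CRISocket     string
		NodeName      string
	}{"192.168.49.2", 8443, "/run/containerd/containerd.sock", "ha-326307"}
	// Render the fragment to stdout.
	if err := template.Must(template.New("kubeadm").Parse(fragment)).Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}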
	I0919 22:23:22.555209   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:23:22.555273   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:23:22.568901   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:23:22.569038   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
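The kube-vip manifest above is later copied to /etc/kubernetes/manifests/kube-vip.yaml, the kubelet's static-pod directory. A generic, illustrative sketch of writing such a manifest via a temp file plus rename, so a watcher like the kubelet never reads a half-written file; the directory, file name, and body here are hypothetical and this is not minikube's code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeManifest writes body to dir/name through a temporary file and a rename,
// so a directory watcher never observes a partially written manifest.
func writeManifest(dir, name string, body []byte) error {
	tmp, err := os.CreateTemp(dir, name+".tmp-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // best-effort cleanup if the rename below never happens
	if _, err := tmp.Write(body); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), filepath.Join(dir, name))
}

func main() {
	if err := os.MkdirAll("manifests", 0o755); err != nil {
		fmt.Println("mkdir:", err)
		return
	}
	if err := writeManifest("manifests", "kube-vip.yaml", []byte("apiVersion: v1\nkind: Pod\n")); err != nil {
		fmt.Println("write manifest:", err)
	}
}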
	I0919 22:23:22.569091   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:23:22.580223   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:23:22.580317   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:23:22.591268   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 22:23:22.612688   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:23:22.636770   69358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0919 22:23:22.658657   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:23:22.681384   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:23:22.685531   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:22.698340   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:22.769217   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:23:22.792280   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.2
	I0919 22:23:22.792300   69358 certs.go:194] generating shared ca certs ...
	I0919 22:23:22.792315   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.792509   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:23:22.792553   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:23:22.792563   69358 certs.go:256] generating profile certs ...
	I0919 22:23:22.792630   69358 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:23:22.792643   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt with IP's: []
	I0919 22:23:22.975725   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt ...
	I0919 22:23:22.975759   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt: {Name:mk32bca88dd6748516774b56251f96e4fc38a69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.975973   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key ...
	I0919 22:23:22.975990   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key: {Name:mkc0e836c004e527dbd2787dc00463a0715cf8a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.976108   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226
	I0919 22:23:22.976125   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:23:23.460427   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 ...
	I0919 22:23:23.460460   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226: {Name:mk98859e0e43a6d4b4da591dc89695908954cc81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.460672   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226 ...
	I0919 22:23:23.460693   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226: {Name:mk3473c1668aec72ec5a5598645b70e29415cdd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.460941   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:23:23.461078   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
	I0919 22:23:23.461207   69358 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:23:23.461233   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt with IP's: []
	I0919 22:23:23.489621   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt ...
	I0919 22:23:23.489652   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt: {Name:mk06f3b4cfde33781bd7076ead00f94525257452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.489837   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key ...
	I0919 22:23:23.489860   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key: {Name:mk632a617a99ac85bf5a9b022d1173caf8e7b208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.489978   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:23:23.490003   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:23:23.490018   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:23:23.490034   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:23:23.490051   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:23:23.490069   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:23:23.490087   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:23:23.490100   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:23:23.490185   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:23:23.490228   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:23:23.490238   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:23:23.490273   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:23:23.490304   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:23:23.490333   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:23:23.490390   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:23.490435   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.490455   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.490497   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.491033   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:23:23.517815   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:23:23.544857   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:23:23.571386   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:23:23.600966   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:23:23.629855   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:23:23.657907   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:23:23.685564   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:23:23.713503   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:23:23.745344   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:23:23.774311   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:23:23.807603   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:23:23.832523   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:23:23.839649   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:23:23.851364   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.856325   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.856396   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.864469   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:23:23.876649   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:23:23.888129   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.892889   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.892949   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.901167   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:23:23.912487   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:23:23.924831   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.929289   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.929357   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.937110   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
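The three openssl/ln sequences above install each CA certificate under its OpenSSL subject-hash name (for example /etc/ssl/certs/b5213941.0) so TLS clients that scan the hashed certificate directory can find it. A hedged Go sketch of the same two steps, shelling out to openssl for the hash; the local paths are assumptions for the sketch, not minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "minikubeCA.pem" // hypothetical local certificate path
	// Equivalent to: openssl x509 -hash -noout -in minikubeCA.pem
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println("openssl:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	if err := os.MkdirAll("certs", 0o755); err != nil {
		fmt.Println("mkdir:", err)
		return
	}
	link := filepath.Join("certs", hash+".0") // e.g. certs/b5213941.0
	_ = os.Remove(link)                       // mimic ln -fs: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		fmt.Println("symlink:", err)
	}
}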
	I0919 22:23:23.948517   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:23:23.952948   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:23:23.953011   69358 kubeadm.go:392] StartCluster: {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:23.953080   69358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 22:23:23.953122   69358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:23:23.991138   69358 cri.go:89] found id: ""
	I0919 22:23:23.991247   69358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:23:24.003111   69358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:23:24.013643   69358 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:23:24.013714   69358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:23:24.024557   69358 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:23:24.024576   69358 kubeadm.go:157] found existing configuration files:
	
	I0919 22:23:24.024633   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:23:24.035252   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:23:24.035322   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:23:24.045590   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:23:24.056529   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:23:24.056590   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:23:24.066716   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:23:24.077570   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:23:24.077653   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:23:24.088177   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:23:24.098372   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:23:24.098426   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:23:24.108265   69358 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:23:24.149643   69358 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:23:24.149730   69358 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:23:24.166048   69358 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:23:24.166117   69358 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:23:24.166172   69358 kubeadm.go:310] OS: Linux
	I0919 22:23:24.166213   69358 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:23:24.166275   69358 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:23:24.166357   69358 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:23:24.166446   69358 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:23:24.166536   69358 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:23:24.166608   69358 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:23:24.166683   69358 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:23:24.166760   69358 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:23:24.230351   69358 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:23:24.230487   69358 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:23:24.230602   69358 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:23:24.238806   69358 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:23:24.243498   69358 out.go:252]   - Generating certificates and keys ...
	I0919 22:23:24.243610   69358 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:23:24.243715   69358 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:23:24.335199   69358 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:23:24.361175   69358 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:23:24.769077   69358 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:23:25.053293   69358 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:23:25.392067   69358 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:23:25.392251   69358 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-326307 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:23:25.629558   69358 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:23:25.629706   69358 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-326307 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:23:26.141828   69358 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:23:26.343650   69358 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:23:26.737207   69358 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:23:26.737292   69358 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:23:27.020543   69358 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:23:27.208963   69358 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:23:27.382044   69358 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:23:27.660395   69358 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:23:27.867964   69358 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:23:27.868475   69358 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:23:27.870857   69358 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:23:27.873408   69358 out.go:252]   - Booting up control plane ...
	I0919 22:23:27.873545   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:23:27.873665   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:23:27.873811   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:23:27.884709   69358 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:23:27.884874   69358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:23:27.892815   69358 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:23:27.893043   69358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:23:27.893108   69358 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:23:27.981591   69358 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:23:27.981772   69358 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:23:29.484085   69358 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501867716s
	I0919 22:23:29.488057   69358 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:23:29.488269   69358 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:23:29.488401   69358 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:23:29.488636   69358 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:23:31.058022   69358 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.569932465s
	I0919 22:23:31.762139   69358 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.27419796s
	I0919 22:23:33.991284   69358 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.503282233s
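	The health probes above can be reproduced by hand when a control-plane bringup stalls. A minimal sketch, assuming the default ports shown in the log are still in use and the commands are run on the node itself (the kubelet serves plain HTTP, the rest serve self-signed TLS, hence -k):

	    $ curl -sf  http://127.0.0.1:10248/healthz   && echo kubelet ok
	    $ curl -skf https://127.0.0.1:10257/healthz  && echo kube-controller-manager ok
	    $ curl -skf https://127.0.0.1:10259/livez    && echo kube-scheduler ok
	    $ curl -skf https://192.168.49.2:8443/livez  && echo kube-apiserver ok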
	I0919 22:23:34.005767   69358 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:23:34.017935   69358 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:23:34.032336   69358 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:23:34.032534   69358 kubeadm.go:310] [mark-control-plane] Marking the node ha-326307 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:23:34.042496   69358 kubeadm.go:310] [bootstrap-token] Using token: ym5hq4.pw1tvtip1io4ljbf
	I0919 22:23:34.044381   69358 out.go:252]   - Configuring RBAC rules ...
	I0919 22:23:34.044558   69358 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:23:34.048649   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:23:34.057509   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:23:34.061297   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:23:34.064926   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:23:34.069534   69358 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:23:34.399239   69358 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:23:34.818126   69358 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:23:35.398001   69358 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:23:35.398907   69358 kubeadm.go:310] 
	I0919 22:23:35.399007   69358 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:23:35.399035   69358 kubeadm.go:310] 
	I0919 22:23:35.399120   69358 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:23:35.399149   69358 kubeadm.go:310] 
	I0919 22:23:35.399207   69358 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:23:35.399301   69358 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:23:35.399350   69358 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:23:35.399356   69358 kubeadm.go:310] 
	I0919 22:23:35.399402   69358 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:23:35.399408   69358 kubeadm.go:310] 
	I0919 22:23:35.399470   69358 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:23:35.399481   69358 kubeadm.go:310] 
	I0919 22:23:35.399554   69358 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:23:35.399644   69358 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:23:35.399706   69358 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:23:35.399712   69358 kubeadm.go:310] 
	I0919 22:23:35.399803   69358 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:23:35.399888   69358 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:23:35.399892   69358 kubeadm.go:310] 
	I0919 22:23:35.399971   69358 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ym5hq4.pw1tvtip1io4ljbf \
	I0919 22:23:35.400068   69358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 \
	I0919 22:23:35.400089   69358 kubeadm.go:310] 	--control-plane 
	I0919 22:23:35.400093   69358 kubeadm.go:310] 
	I0919 22:23:35.400204   69358 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:23:35.400217   69358 kubeadm.go:310] 
	I0919 22:23:35.400285   69358 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ym5hq4.pw1tvtip1io4ljbf \
	I0919 22:23:35.400382   69358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 
	I0919 22:23:35.403119   69358 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:23:35.403274   69358 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
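	The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 digest of the cluster CA's public key. If it is ever lost, it can be recomputed on a control-plane node with the standard kubeadm-documented pipeline (assuming the default CA path /etc/kubernetes/pki/ca.crt):

	    $ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | openssl dgst -sha256 -hex | sed 's/^.* //'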
	I0919 22:23:35.403305   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:35.403317   69358 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:23:35.407302   69358 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:23:35.409983   69358 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:23:35.415011   69358 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:23:35.415039   69358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:23:35.436210   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:23:35.679694   69358 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:23:35.679756   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:35.679779   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307 minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=true
	I0919 22:23:35.787076   69358 ops.go:34] apiserver oom_adj: -16
	I0919 22:23:35.787237   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:36.287327   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:36.787300   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:37.287415   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:37.788066   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:38.287401   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:38.787731   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.288028   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.788301   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.864456   69358 kubeadm.go:1105] duration metric: took 4.184765822s to wait for elevateKubeSystemPrivileges
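	The repeated "kubectl get sa default" calls above are a poll: minikube retries roughly every 500ms until the default ServiceAccount exists, i.e. until the service-account controller is up. A sketch of an equivalent standalone loop (illustrative only, not minikube's actual code):

	    $ until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	        sleep 0.5   # retry until the service-account controller has created "default"
	      done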
	I0919 22:23:39.864500   69358 kubeadm.go:394] duration metric: took 15.911493151s to StartCluster
	I0919 22:23:39.864524   69358 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:39.864601   69358 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:23:39.865911   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:39.866255   69358 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:39.866275   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:23:39.866288   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:23:39.866297   69358 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:23:39.866377   69358 addons.go:69] Setting storage-provisioner=true in profile "ha-326307"
	I0919 22:23:39.866398   69358 addons.go:238] Setting addon storage-provisioner=true in "ha-326307"
	I0919 22:23:39.866400   69358 addons.go:69] Setting default-storageclass=true in profile "ha-326307"
	I0919 22:23:39.866428   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:39.866523   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:39.866434   69358 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-326307"
	I0919 22:23:39.866921   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.867012   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.892851   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:23:39.893863   69358 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:23:39.893944   69358 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:23:39.893953   69358 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:23:39.894002   69358 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:23:39.894061   69358 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:23:39.893888   69358 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:23:39.894642   69358 addons.go:238] Setting addon default-storageclass=true in "ha-326307"
	I0919 22:23:39.894691   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:39.895196   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.895724   69358 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:23:39.897293   69358 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:23:39.897315   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:23:39.897386   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:39.923915   69358 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:23:39.923939   69358 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:23:39.924001   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:39.926323   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:39.953300   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:39.968501   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:23:40.065441   69358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:23:40.083647   69358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:23:40.190461   69358 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
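	The sed pipeline at 22:23:39.968501 injects a hosts block into the coredns ConfigMap so that host.minikube.internal resolves to the host gateway 192.168.49.1. A sketch for confirming the record with kubectl pointed at this cluster; the expected fragment below is inferred from the sed expression, not copied from the live ConfigMap:

	    $ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	    # should contain, immediately before the "forward . /etc/resolv.conf" line:
	    #     hosts {
	    #        192.168.49.1 host.minikube.internal
	    #        fallthrough
	    #     }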
	I0919 22:23:40.433561   69358 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:23:40.435567   69358 addons.go:514] duration metric: took 569.25898ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:23:40.435633   69358 start.go:246] waiting for cluster config update ...
	I0919 22:23:40.435651   69358 start.go:255] writing updated cluster config ...
	I0919 22:23:40.437510   69358 out.go:203] 
	I0919 22:23:40.439070   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:40.439141   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:40.441238   69358 out.go:179] * Starting "ha-326307-m02" control-plane node in "ha-326307" cluster
	I0919 22:23:40.443382   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:23:40.445749   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:23:40.447079   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:40.447132   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:23:40.447229   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:23:40.447308   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:23:40.447326   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:23:40.447427   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:40.470325   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:23:40.470347   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:23:40.470366   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:23:40.470391   69358 start.go:360] acquireMachinesLock for ha-326307-m02: {Name:mk4919a9b19250804b0f53d01bcd11efaf9a431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:23:40.470518   69358 start.go:364] duration metric: took 88.309µs to acquireMachinesLock for "ha-326307-m02"
	I0919 22:23:40.470552   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:40.470618   69358 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:23:40.473495   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:23:40.473607   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:23:40.473631   69358 client.go:168] LocalClient.Create starting
	I0919 22:23:40.473689   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:23:40.473724   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:40.473734   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:40.473828   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:23:40.473853   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:40.473861   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:40.474095   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:40.493916   69358 network_create.go:77] Found existing network {name:ha-326307 subnet:0xc000ad7620 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:23:40.493972   69358 kic.go:121] calculated static IP "192.168.49.3" for the "ha-326307-m02" container
	I0919 22:23:40.494055   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:23:40.516112   69358 cli_runner.go:164] Run: docker volume create ha-326307-m02 --label name.minikube.sigs.k8s.io=ha-326307-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:23:40.537046   69358 oci.go:103] Successfully created a docker volume ha-326307-m02
	I0919 22:23:40.537137   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m02 --entrypoint /usr/bin/test -v ha-326307-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:23:40.991997   69358 oci.go:107] Successfully prepared a docker volume ha-326307-m02
	I0919 22:23:40.992038   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:40.992061   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:23:40.992121   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:23:45.362629   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.370467998s)
	I0919 22:23:45.362666   69358 kic.go:203] duration metric: took 4.370603938s to extract preloaded images to volume ...
	W0919 22:23:45.362777   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:23:45.362811   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:23:45.362846   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:23:45.417833   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307-m02 --name ha-326307-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307-m02 --network ha-326307 --ip 192.168.49.3 --volume ha-326307-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:23:45.744363   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Running}}
	I0919 22:23:45.768456   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:45.789293   69358 cli_runner.go:164] Run: docker exec ha-326307-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:23:45.846760   69358 oci.go:144] the created container "ha-326307-m02" has a running status.
	I0919 22:23:45.846794   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa...
	I0919 22:23:46.005236   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:23:46.005288   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:23:46.042640   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:46.067424   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:23:46.067455   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:23:46.132729   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:46.155854   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:23:46.155967   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.177181   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.177511   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.177533   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:23:46.320054   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:23:46.320089   69358 ubuntu.go:182] provisioning hostname "ha-326307-m02"
	I0919 22:23:46.320185   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.341740   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.341951   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.341965   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m02 && echo "ha-326307-m02" | sudo tee /etc/hostname
	I0919 22:23:46.497123   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:23:46.497234   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.520214   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.520436   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.520455   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:23:46.659417   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:23:46.659458   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:23:46.659492   69358 ubuntu.go:190] setting up certificates
	I0919 22:23:46.659505   69358 provision.go:84] configureAuth start
	I0919 22:23:46.659556   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:46.679498   69358 provision.go:143] copyHostCerts
	I0919 22:23:46.679551   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:46.679598   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:23:46.679605   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:46.679712   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:23:46.679851   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:46.679882   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:23:46.679893   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:46.679947   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:23:46.680043   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:46.680141   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:23:46.680185   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:46.680251   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:23:46.680367   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m02 san=[127.0.0.1 192.168.49.3 ha-326307-m02 localhost minikube]
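	minikube generates that server certificate in its own Go code; an openssl equivalent that signs a server cert with the same SAN list against the profile CA would look roughly like the sketch below (file names and subject fields are hypothetical, only the SANs come from the log line above):

	    $ openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	        -subj "/O=jenkins.ha-326307-m02/CN=minikube" -out server.csr
	    $ openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	        -days 365 -sha256 -out server.pem \
	        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.3,DNS:ha-326307-m02,DNS:localhost,DNS:minikube')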
	I0919 22:23:46.869190   69358 provision.go:177] copyRemoteCerts
	I0919 22:23:46.869251   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:23:46.869285   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.888798   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:46.988385   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:23:46.988452   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:23:47.018227   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:23:47.018299   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:23:47.046810   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:23:47.046866   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:23:47.074372   69358 provision.go:87] duration metric: took 414.855982ms to configureAuth
	I0919 22:23:47.074400   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:23:47.074581   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:47.074598   69358 machine.go:96] duration metric: took 918.712366ms to provisionDockerMachine
	I0919 22:23:47.074607   69358 client.go:171] duration metric: took 6.600969352s to LocalClient.Create
	I0919 22:23:47.074631   69358 start.go:167] duration metric: took 6.601023702s to libmachine.API.Create "ha-326307"
	I0919 22:23:47.074642   69358 start.go:293] postStartSetup for "ha-326307-m02" (driver="docker")
	I0919 22:23:47.074650   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:23:47.074721   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:23:47.074767   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.094538   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.195213   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:23:47.199088   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:23:47.199139   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:23:47.199181   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:23:47.199191   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:23:47.199215   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:23:47.199276   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:23:47.199378   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:23:47.199394   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:23:47.199502   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:23:47.209642   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:47.240945   69358 start.go:296] duration metric: took 166.288086ms for postStartSetup
	I0919 22:23:47.241383   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:47.261061   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:47.261460   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:23:47.261513   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.280359   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.374609   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:23:47.379255   69358 start.go:128] duration metric: took 6.908623332s to createHost
	I0919 22:23:47.379283   69358 start.go:83] releasing machines lock for "ha-326307-m02", held for 6.908753842s
	I0919 22:23:47.379346   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:47.400418   69358 out.go:179] * Found network options:
	I0919 22:23:47.401854   69358 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:23:47.403072   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:23:47.403133   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:23:47.403263   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:23:47.403266   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:23:47.403326   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.403332   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.423928   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.424218   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.597529   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:23:47.630263   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:23:47.630334   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:23:47.661706   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:23:47.661733   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:23:47.661772   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:23:47.661826   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:23:47.675485   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:23:47.687726   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:23:47.687780   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:23:47.701818   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:23:47.717912   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:23:47.789825   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:23:47.863188   69358 docker.go:234] disabling docker service ...
	I0919 22:23:47.863267   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:23:47.881757   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:23:47.893830   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:23:47.963004   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:23:48.034120   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:23:48.046843   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:23:48.065279   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:23:48.078269   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:23:48.089105   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:23:48.089186   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:23:48.099867   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:48.111076   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:23:48.122049   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:48.132648   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:23:48.142263   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:23:48.152876   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:23:48.163459   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:23:48.174096   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:23:48.183483   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:23:48.192780   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:48.261004   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
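	The sed edits between 22:23:48.065 and 22:23:48.163 put containerd into the state minikube expects on this node: systemd cgroup driver, the registry.k8s.io/pause:3.10.1 sandbox image, the runc v2 shim, and /etc/cni/net.d as the CNI conf dir. After the restart the effective settings can be spot-checked; the expected values below are taken from the sed expressions, the surrounding TOML structure is assumed:

	    $ grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
	    #   restrict_oom_score_adj = false
	    #   SystemdCgroup = true
	    #   enable_unprivileged_ports = true
	    #   conf_dir = "/etc/cni/net.d"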
	I0919 22:23:48.364434   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:23:48.364508   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:23:48.368726   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:23:48.368792   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:23:48.372683   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:23:48.409110   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:23:48.409200   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:48.433389   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:48.460529   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:23:48.462207   69358 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:23:48.464087   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:48.482217   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:23:48.486620   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:48.498806   69358 mustload.go:65] Loading cluster: ha-326307
	I0919 22:23:48.499032   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:48.499315   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:48.518576   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:48.518850   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.3
	I0919 22:23:48.518866   69358 certs.go:194] generating shared ca certs ...
	I0919 22:23:48.518885   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.519012   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:23:48.519082   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:23:48.519096   69358 certs.go:256] generating profile certs ...
	I0919 22:23:48.519222   69358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:23:48.519259   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4
	I0919 22:23:48.519288   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:23:48.963393   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 ...
	I0919 22:23:48.963428   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4: {Name:mk381f64cc0991e3a6417e9586b9565eb7a8dbf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.963635   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4 ...
	I0919 22:23:48.963660   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4: {Name:mk4dbead0b9c36c7a3635520729a1eb2d4b33f13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.963762   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:23:48.963935   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
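	The apiserver certificate is regenerated under a new suffix (.3b537cd4) and copied over apiserver.crt because its SAN set must now cover the second control-plane IP 192.168.49.3 and the VIP 192.168.49.254 alongside the original addresses. The SANs of the resulting cert can be inspected with a standard openssl call:

	    $ openssl x509 -noout -text \
	        -in /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt \
	        | grep -A1 'Subject Alternative Name'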
	I0919 22:23:48.964103   69358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:23:48.964120   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:23:48.964138   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:23:48.964166   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:23:48.964183   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:23:48.964200   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:23:48.964218   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:23:48.964234   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:23:48.964251   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:23:48.964313   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:23:48.964355   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:23:48.964366   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:23:48.964406   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:23:48.964438   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:23:48.964471   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:23:48.964528   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:48.964570   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:23:48.964592   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:23:48.964612   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:48.964731   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:48.983907   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:49.073692   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:23:49.078819   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:23:49.094234   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:23:49.099593   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:23:49.113663   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:23:49.117744   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:23:49.133048   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:23:49.136861   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:23:49.150734   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:23:49.154901   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:23:49.169388   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:23:49.173566   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:23:49.188070   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:23:49.215594   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:23:49.243561   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:23:49.271624   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:23:49.301814   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:23:49.332556   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:23:49.360723   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:23:49.388872   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:23:49.417316   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:23:49.448722   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:23:49.476877   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:23:49.504914   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:23:49.524969   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:23:49.544942   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:23:49.564506   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:23:49.584887   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:23:49.605725   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:23:49.625552   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:23:49.645811   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:23:49.652062   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:23:49.664544   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.668823   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.668889   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.676892   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:23:49.688737   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:23:49.699741   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.703762   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.703823   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.711311   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:23:49.721987   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:23:49.732874   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.737289   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.737351   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.745312   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:23:49.756384   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:23:49.760242   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:23:49.760315   69358 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0919 22:23:49.760415   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:23:49.760438   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:23:49.760476   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:23:49.773427   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:23:49.773499   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:23:49.773549   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:23:49.784237   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:23:49.784306   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:23:49.794534   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:23:49.814529   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:23:49.837846   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:23:49.859421   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:23:49.863859   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:49.876721   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:49.948089   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:23:49.971010   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:49.971327   69358 start.go:317] joinCluster: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:49.971508   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:23:49.971618   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:49.992535   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:50.137695   69358 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:50.137740   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kb90tj.om7zof6htice1y8z --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:24:08.633363   69358 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kb90tj.om7zof6htice1y8z --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.495537277s)
	I0919 22:24:08.633404   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:24:08.849981   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307-m02 minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=false
	I0919 22:24:08.928109   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326307-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:24:09.011507   69358 start.go:319] duration metric: took 19.040175049s to joinCluster
	I0919 22:24:09.011590   69358 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:09.011816   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:09.013756   69358 out.go:179] * Verifying Kubernetes components...
	I0919 22:24:09.015232   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:09.115618   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:09.130578   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:24:09.130645   69358 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:24:09.130869   69358 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m02" to be "Ready" ...
	W0919 22:24:11.134373   69358 node_ready.go:57] node "ha-326307-m02" has "Ready":"False" status (will retry)
	I0919 22:24:11.634655   69358 node_ready.go:49] node "ha-326307-m02" is "Ready"
	I0919 22:24:11.634683   69358 node_ready.go:38] duration metric: took 2.503796185s for node "ha-326307-m02" to be "Ready" ...
	I0919 22:24:11.634697   69358 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:24:11.634751   69358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:24:11.647782   69358 api_server.go:72] duration metric: took 2.636155477s to wait for apiserver process to appear ...
	I0919 22:24:11.647812   69358 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:24:11.647848   69358 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:24:11.652005   69358 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:24:11.652952   69358 api_server.go:141] control plane version: v1.34.0
	I0919 22:24:11.652975   69358 api_server.go:131] duration metric: took 5.15649ms to wait for apiserver health ...
	I0919 22:24:11.652984   69358 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:24:11.657535   69358 system_pods.go:59] 17 kube-system pods found
	I0919 22:24:11.657569   69358 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:11.657577   69358 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:11.657581   69358 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:11.657586   69358 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Pending
	I0919 22:24:11.657591   69358 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:11.657598   69358 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Pending: PodScheduled:SchedulerError (pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed)
	I0919 22:24:11.657604   69358 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:11.657609   69358 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Pending
	I0919 22:24:11.657616   69358 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:11.657621   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Pending
	I0919 22:24:11.657626   69358 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:11.657636   69358 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q8mtj": pod kube-proxy-q8mtj is already assigned to node "ha-326307-m02")
	I0919 22:24:11.657642   69358 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:11.657649   69358 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Pending
	I0919 22:24:11.657654   69358 system_pods.go:61] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:11.657660   69358 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Pending
	I0919 22:24:11.657665   69358 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:11.657673   69358 system_pods.go:74] duration metric: took 4.68298ms to wait for pod list to return data ...
	I0919 22:24:11.657687   69358 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:24:11.660430   69358 default_sa.go:45] found service account: "default"
	I0919 22:24:11.660456   69358 default_sa.go:55] duration metric: took 2.762581ms for default service account to be created ...
	I0919 22:24:11.660467   69358 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:24:11.664515   69358 system_pods.go:86] 17 kube-system pods found
	I0919 22:24:11.664549   69358 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:11.664557   69358 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:11.664563   69358 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:11.664567   69358 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Pending
	I0919 22:24:11.664574   69358 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:11.664583   69358 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Pending: PodScheduled:SchedulerError (pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed)
	I0919 22:24:11.664590   69358 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:11.664594   69358 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Pending
	I0919 22:24:11.664600   69358 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:11.664606   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Pending
	I0919 22:24:11.664615   69358 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:11.664623   69358 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q8mtj": pod kube-proxy-q8mtj is already assigned to node "ha-326307-m02")
	I0919 22:24:11.664629   69358 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:11.664637   69358 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Pending
	I0919 22:24:11.664643   69358 system_pods.go:89] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:11.664649   69358 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Pending
	I0919 22:24:11.664653   69358 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:11.664663   69358 system_pods.go:126] duration metric: took 4.189005ms to wait for k8s-apps to be running ...
	I0919 22:24:11.664676   69358 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:24:11.664734   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:24:11.677679   69358 system_svc.go:56] duration metric: took 12.991783ms WaitForService to wait for kubelet
	I0919 22:24:11.677718   69358 kubeadm.go:578] duration metric: took 2.666095008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:11.677741   69358 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:24:11.681219   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:11.681249   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:11.681276   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:11.681282   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:11.681288   69358 node_conditions.go:105] duration metric: took 3.540774ms to run NodePressure ...
	I0919 22:24:11.681302   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:24:11.681336   69358 start.go:255] writing updated cluster config ...
	I0919 22:24:11.683465   69358 out.go:203] 
	I0919 22:24:11.685336   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:11.685480   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:11.687190   69358 out.go:179] * Starting "ha-326307-m03" control-plane node in "ha-326307" cluster
	I0919 22:24:11.688774   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:24:11.690230   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:11.691529   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:24:11.691564   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:11.691570   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:11.691776   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:11.691792   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:24:11.691940   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:11.714494   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:11.714516   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:11.714538   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:11.714564   69358 start.go:360] acquireMachinesLock for ha-326307-m03: {Name:mk07818636650a6efffb19d787e84a34d6f1dd98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:11.714717   69358 start.go:364] duration metric: took 129.412µs to acquireMachinesLock for "ha-326307-m03"
	I0919 22:24:11.714749   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:11.714883   69358 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:24:11.717146   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:11.717288   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:24:11.717325   69358 client.go:168] LocalClient.Create starting
	I0919 22:24:11.717396   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:24:11.717429   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:11.717444   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:11.717499   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:24:11.717523   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:11.717531   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:11.717757   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:11.736709   69358 network_create.go:77] Found existing network {name:ha-326307 subnet:0xc001c6a9f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:11.736749   69358 kic.go:121] calculated static IP "192.168.49.4" for the "ha-326307-m03" container
	I0919 22:24:11.736838   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:11.757855   69358 cli_runner.go:164] Run: docker volume create ha-326307-m03 --label name.minikube.sigs.k8s.io=ha-326307-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:11.780198   69358 oci.go:103] Successfully created a docker volume ha-326307-m03
	I0919 22:24:11.780287   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m03 --entrypoint /usr/bin/test -v ha-326307-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:12.269719   69358 oci.go:107] Successfully prepared a docker volume ha-326307-m03
	I0919 22:24:12.269772   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:24:12.269795   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:12.269864   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:16.658999   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.389088771s)
	I0919 22:24:16.659030   69358 kic.go:203] duration metric: took 4.389232064s to extract preloaded images to volume ...
	W0919 22:24:16.659114   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:16.659151   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:16.659211   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:16.714324   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307-m03 --name ha-326307-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307-m03 --network ha-326307 --ip 192.168.49.4 --volume ha-326307-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:17.029039   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Running}}
	I0919 22:24:17.050534   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.070017   69358 cli_runner.go:164] Run: docker exec ha-326307-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:17.125252   69358 oci.go:144] the created container "ha-326307-m03" has a running status.
	I0919 22:24:17.125293   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa...
	I0919 22:24:17.618351   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:17.618395   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:17.646956   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.667176   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:17.667203   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:17.713667   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.734276   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:17.734370   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:17.755726   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:17.755941   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:17.755953   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:17.894482   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:24:17.894512   69358 ubuntu.go:182] provisioning hostname "ha-326307-m03"
	I0919 22:24:17.894572   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:17.914204   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:17.914507   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:17.914530   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m03 && echo "ha-326307-m03" | sudo tee /etc/hostname
	I0919 22:24:18.068724   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:24:18.068805   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.088244   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:18.088504   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:18.088525   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:18.227353   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:18.227390   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:24:18.227421   69358 ubuntu.go:190] setting up certificates
	I0919 22:24:18.227433   69358 provision.go:84] configureAuth start
	I0919 22:24:18.227496   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.247948   69358 provision.go:143] copyHostCerts
	I0919 22:24:18.247989   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:24:18.248023   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:24:18.248029   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:24:18.248096   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:24:18.248231   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:24:18.248289   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:24:18.248299   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:24:18.248338   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:24:18.248404   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:24:18.248423   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:24:18.248427   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:24:18.248457   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:24:18.248512   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m03 san=[127.0.0.1 192.168.49.4 ha-326307-m03 localhost minikube]
	I0919 22:24:18.393257   69358 provision.go:177] copyRemoteCerts
	I0919 22:24:18.393319   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:18.393353   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.412748   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.514005   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:18.514092   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:24:18.542657   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:18.542733   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:18.569691   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:18.569759   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:18.596329   69358 provision.go:87] duration metric: took 368.876183ms to configureAuth
	I0919 22:24:18.596357   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:18.596551   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:18.596562   69358 machine.go:96] duration metric: took 862.263986ms to provisionDockerMachine
	I0919 22:24:18.596567   69358 client.go:171] duration metric: took 6.879237415s to LocalClient.Create
	I0919 22:24:18.596586   69358 start.go:167] duration metric: took 6.879300568s to libmachine.API.Create "ha-326307"
	I0919 22:24:18.596594   69358 start.go:293] postStartSetup for "ha-326307-m03" (driver="docker")
	I0919 22:24:18.596602   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:18.596644   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:18.596677   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.615349   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.717907   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:18.722093   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:18.722137   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:18.722150   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:18.722173   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:18.722186   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:24:18.722248   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:24:18.722356   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:24:18.722372   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:24:18.722580   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:18.732899   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:24:18.766453   69358 start.go:296] duration metric: took 169.843532ms for postStartSetup
	I0919 22:24:18.766899   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.786322   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:18.786775   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:18.786833   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.806377   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.901798   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:18.907121   69358 start.go:128] duration metric: took 7.192223106s to createHost
	I0919 22:24:18.907180   69358 start.go:83] releasing machines lock for "ha-326307-m03", held for 7.192445142s
	I0919 22:24:18.907266   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.929545   69358 out.go:179] * Found network options:
	I0919 22:24:18.931020   69358 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:24:18.932299   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932334   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932375   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932396   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:18.932501   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:18.932558   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.932588   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:18.932662   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.952990   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.953400   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:19.131622   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:19.165991   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:19.166079   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:19.197850   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:19.197878   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:24:19.197909   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:19.197960   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:24:19.211538   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:19.223959   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:24:19.224009   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:24:19.239088   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:24:19.254102   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:24:19.328965   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:24:19.406808   69358 docker.go:234] disabling docker service ...
	I0919 22:24:19.406888   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:24:19.425948   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:24:19.438801   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:24:19.510941   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:24:19.581470   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:19.594683   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:19.613666   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:19.627192   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:19.638603   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:19.638668   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:19.649965   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:19.661530   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:19.673111   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:19.684782   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:19.696056   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:19.707630   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:19.719687   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:19.731477   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:19.741738   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:19.751963   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:19.822277   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:19.931918   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:24:19.931995   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:24:19.936531   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:24:19.936591   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:24:19.940632   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:19.977944   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:24:19.978013   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:24:20.003290   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:24:20.032714   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:24:20.034190   69358 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:20.035560   69358 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:24:20.036915   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:20.055444   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:20.059762   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:20.072851   69358 mustload.go:65] Loading cluster: ha-326307
	I0919 22:24:20.073081   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:20.073298   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:24:20.091365   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:24:20.091605   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.4
	I0919 22:24:20.091616   69358 certs.go:194] generating shared ca certs ...
	I0919 22:24:20.091629   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.091746   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:24:20.091786   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:24:20.091796   69358 certs.go:256] generating profile certs ...
	I0919 22:24:20.091865   69358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:24:20.091891   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604
	I0919 22:24:20.091905   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:24:20.372898   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 ...
	I0919 22:24:20.372943   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604: {Name:mk9b724916886d4c69140cc45e23ce082460d116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.373186   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604 ...
	I0919 22:24:20.373210   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604: {Name:mkfc0cd42f96faa2f697a81fc7ca671182c3cea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.373311   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:24:20.373471   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
	I0919 22:24:20.373649   69358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:24:20.373668   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:20.373682   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:20.373692   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:20.373703   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:20.373713   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:20.373723   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:20.373733   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:20.373743   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:20.373795   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:24:20.373823   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:20.373832   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:24:20.373856   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:24:20.373878   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:20.373899   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:20.373936   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:24:20.373962   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:24:20.373976   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:20.373987   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:24:20.374034   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:24:20.394051   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:24:20.484593   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:20.489010   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:20.503471   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:20.507649   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:24:20.522195   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:20.526410   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:20.541840   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:20.546043   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:24:20.560364   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:20.564230   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:20.577547   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:20.581387   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:20.594800   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:20.622991   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:20.651461   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:20.678113   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:20.705292   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:24:20.732489   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:20.762310   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:20.789808   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:20.819251   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:24:20.851010   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:20.879714   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:24:20.908177   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:20.928644   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:24:20.949340   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:20.969391   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:24:20.989837   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:21.011118   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:21.031485   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:24:21.052354   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:24:21.058486   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:24:21.069582   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.074372   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.074440   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.082186   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:21.092957   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:24:21.104085   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.108193   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.108258   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.116078   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:21.127607   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:21.139338   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.143794   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.143848   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.151321   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
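For reference, the openssl/ln sequence above is how the installed CA certificates get wired into the node's trust store: each PEM under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and symlinked as /etc/ssl/certs/<hash>.0 so OpenSSL can find it by subject hash. A rough Go sketch of those two steps (hypothetical helper, run locally rather than over SSH; not minikube's implementation):

    // Hypothetical sketch: hash a PEM certificate the way OpenSSL does and
    // create the /etc/ssl/certs/<hash>.0 symlink (the effect of "ln -fs" above).
    package certlink

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCertByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any stale link, mirroring the -f flag
        return os.Symlink(certPath, link)
    }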
	I0919 22:24:21.162759   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:21.166499   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:21.166555   69358 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0919 22:24:21.166642   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:21.166677   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:21.166738   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:21.180123   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:21.180202   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
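The manifest above is the static pod minikube writes for kube-vip on this control-plane node: the VIP 192.168.49.254 is advertised via ARP on eth0, with leader election (plndr-cp-lock) deciding which node holds the address. Because the `lsmod | grep ip_vs` probe earlier exited non-zero, control-plane load-balancing was skipped and only the VIP is configured. A rough Go sketch of that module probe, reading /proc/modules instead of shelling out (hypothetical, not minikube's kube-vip.go):

    // Hypothetical sketch: report whether the ip_vs kernel module is loaded,
    // analogous to the `lsmod | grep ip_vs` check in the log above.
    package kprobe

    import (
        "os"
        "strings"
    )

    func ipvsLoaded() (bool, error) {
        data, err := os.ReadFile("/proc/modules")
        if err != nil {
            return false, err
        }
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasPrefix(line, "ip_vs ") {
                return true, nil
            }
        }
        return false, nil
    }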
	I0919 22:24:21.180261   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:21.189900   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:21.189963   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:21.200336   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:24:21.220715   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:21.244525   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:21.268789   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:21.272885   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:21.285764   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:21.362911   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:21.394403   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:24:21.394691   69358 start.go:317] joinCluster: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.394850   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:21.394898   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:24:21.419020   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:24:21.569927   69358 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:21.569980   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uhifqr.okdtfjqzhuoxbb2e --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:24:32.089764   69358 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uhifqr.okdtfjqzhuoxbb2e --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.519762438s)
	I0919 22:24:32.089793   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:24:32.309566   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307-m03 minikube.k8s.io/updated_at=2025_09_19T22_24_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=false
	I0919 22:24:32.391142   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326307-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:24:32.471336   69358 start.go:319] duration metric: took 11.076641052s to joinCluster
	I0919 22:24:32.471402   69358 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:32.471770   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:32.473461   69358 out.go:179] * Verifying Kubernetes components...
	I0919 22:24:32.475427   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:32.579664   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:32.593786   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:24:32.593856   69358 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:24:32.594084   69358 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m03" to be "Ready" ...
	W0919 22:24:34.597297   69358 node_ready.go:57] node "ha-326307-m03" has "Ready":"False" status (will retry)
	I0919 22:24:35.098269   69358 node_ready.go:49] node "ha-326307-m03" is "Ready"
	I0919 22:24:35.098296   69358 node_ready.go:38] duration metric: took 2.504196997s for node "ha-326307-m03" to be "Ready" ...
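The node-readiness wait above amounts to polling the node's Ready condition through the API server until it reports True or the timeout expires. A minimal client-go sketch of that loop (hypothetical names; not minikube's node_ready.go):

    // Hypothetical sketch: poll the named node until its Ready condition is True.
    package kverify

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }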
	I0919 22:24:35.098310   69358 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:24:35.098358   69358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:24:35.111440   69358 api_server.go:72] duration metric: took 2.640014462s to wait for apiserver process to appear ...
	I0919 22:24:35.111465   69358 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:24:35.111483   69358 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:24:35.115724   69358 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:24:35.116810   69358 api_server.go:141] control plane version: v1.34.0
	I0919 22:24:35.116837   69358 api_server.go:131] duration metric: took 5.364462ms to wait for apiserver health ...
	I0919 22:24:35.116849   69358 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:24:35.123343   69358 system_pods.go:59] 27 kube-system pods found
	I0919 22:24:35.123372   69358 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:35.123377   69358 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:35.123380   69358 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:35.123384   69358 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:24:35.123387   69358 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Pending
	I0919 22:24:35.123390   69358 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:35.123393   69358 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:24:35.123400   69358 system_pods.go:61] "kindnet-pnj9r" [14a458fc-0e9d-42e9-9473-f7f2a6f7f571] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-pnj9r": pod kindnet-pnj9r is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123408   69358 system_pods.go:61] "kindnet-qxwpq" [173e48ec-ef56-4824-9f55-a04b199b7943] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qxwpq": pod kindnet-qxwpq is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123416   69358 system_pods.go:61] "kindnet-wcct9" [5472dcae-344b-43fb-84d1-8d0d41852cd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wcct9": pod kindnet-wcct9 is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123427   69358 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:35.123433   69358 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:24:35.123445   69358 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Pending
	I0919 22:24:35.123450   69358 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:35.123454   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running
	I0919 22:24:35.123457   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Pending
	I0919 22:24:35.123461   69358 system_pods.go:61] "kube-proxy-6nmjx" [81414747-6c4e-495e-a28d-cb17f0c0c306] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-6nmjx": pod kube-proxy-6nmjx is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123465   69358 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:35.123469   69358 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:24:35.123472   69358 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ws89d": pod kube-proxy-ws89d is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123477   69358 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:35.123481   69358 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running
	I0919 22:24:35.123487   69358 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Pending
	I0919 22:24:35.123489   69358 system_pods.go:61] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:35.123492   69358 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:24:35.123496   69358 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Pending
	I0919 22:24:35.123503   69358 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:35.123511   69358 system_pods.go:74] duration metric: took 6.65469ms to wait for pod list to return data ...
	I0919 22:24:35.123525   69358 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:24:35.126592   69358 default_sa.go:45] found service account: "default"
	I0919 22:24:35.126616   69358 default_sa.go:55] duration metric: took 3.083846ms for default service account to be created ...
	I0919 22:24:35.126627   69358 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:24:35.131895   69358 system_pods.go:86] 27 kube-system pods found
	I0919 22:24:35.131928   69358 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:35.131936   69358 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:35.131941   69358 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:35.131946   69358 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:24:35.131950   69358 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Pending
	I0919 22:24:35.131954   69358 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:35.131959   69358 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:24:35.131968   69358 system_pods.go:89] "kindnet-pnj9r" [14a458fc-0e9d-42e9-9473-f7f2a6f7f571] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-pnj9r": pod kindnet-pnj9r is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131975   69358 system_pods.go:89] "kindnet-qxwpq" [173e48ec-ef56-4824-9f55-a04b199b7943] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qxwpq": pod kindnet-qxwpq is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131986   69358 system_pods.go:89] "kindnet-wcct9" [5472dcae-344b-43fb-84d1-8d0d41852cd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wcct9": pod kindnet-wcct9 is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131993   69358 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:35.132003   69358 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:24:35.132009   69358 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Pending
	I0919 22:24:35.132015   69358 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:35.132022   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running
	I0919 22:24:35.132028   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Pending
	I0919 22:24:35.132035   69358 system_pods.go:89] "kube-proxy-6nmjx" [81414747-6c4e-495e-a28d-cb17f0c0c306] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-6nmjx": pod kube-proxy-6nmjx is already assigned to node "ha-326307-m03")
	I0919 22:24:35.132044   69358 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:35.132050   69358 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:24:35.132057   69358 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ws89d": pod kube-proxy-ws89d is already assigned to node "ha-326307-m03")
	I0919 22:24:35.132067   69358 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:35.132076   69358 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running
	I0919 22:24:35.132082   69358 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Pending
	I0919 22:24:35.132090   69358 system_pods.go:89] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:35.132096   69358 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:24:35.132101   69358 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Pending
	I0919 22:24:35.132107   69358 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:35.132117   69358 system_pods.go:126] duration metric: took 5.483041ms to wait for k8s-apps to be running ...
	I0919 22:24:35.132130   69358 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:24:35.132201   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:24:35.145901   69358 system_svc.go:56] duration metric: took 13.762213ms WaitForService to wait for kubelet
	I0919 22:24:35.145934   69358 kubeadm.go:578] duration metric: took 2.67451015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:35.145953   69358 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:24:35.149091   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149114   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149122   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149126   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149129   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149133   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149137   69358 node_conditions.go:105] duration metric: took 3.180117ms to run NodePressure ...
	I0919 22:24:35.149147   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:24:35.149187   69358 start.go:255] writing updated cluster config ...
	I0919 22:24:35.149520   69358 ssh_runner.go:195] Run: rm -f paused
	I0919 22:24:35.153920   69358 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:24:35.154452   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
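Note that the rest.Config above leaves QPS and Burst at 0, so client-go falls back to its defaults (5 requests/s, burst 10); that is why the "Waited before sending request ... client-side throttling" lines appear below during the rapid readiness GETs. A minimal sketch of widening those limits on a rest.Config (illustrative only, not what minikube does here):

    // Hypothetical sketch: raise the client-side rate limits before building
    // the clientset, so bursts of status GETs are not locally throttled.
    package kverify

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func clientWithHigherLimits(cfg *rest.Config) (*kubernetes.Clientset, error) {
        cfg.QPS = 50    // client-go default is 5 when left at 0
        cfg.Burst = 100 // client-go default is 10 when left at 0
        return kubernetes.NewForConfig(cfg)
    }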
	I0919 22:24:35.158459   69358 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9j5pw" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.164361   69358 pod_ready.go:94] pod "coredns-66bc5c9577-9j5pw" is "Ready"
	I0919 22:24:35.164388   69358 pod_ready.go:86] duration metric: took 5.90604ms for pod "coredns-66bc5c9577-9j5pw" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.164396   69358 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wqvzd" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.170275   69358 pod_ready.go:94] pod "coredns-66bc5c9577-wqvzd" is "Ready"
	I0919 22:24:35.170305   69358 pod_ready.go:86] duration metric: took 5.903438ms for pod "coredns-66bc5c9577-wqvzd" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.221651   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.227692   69358 pod_ready.go:94] pod "etcd-ha-326307" is "Ready"
	I0919 22:24:35.227721   69358 pod_ready.go:86] duration metric: took 6.035355ms for pod "etcd-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.227738   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.234705   69358 pod_ready.go:94] pod "etcd-ha-326307-m02" is "Ready"
	I0919 22:24:35.234755   69358 pod_ready.go:86] duration metric: took 6.991962ms for pod "etcd-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.234769   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.355285   69358 request.go:683] "Waited before sending request" delay="120.371513ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326307-m03"
	I0919 22:24:35.555444   69358 request.go:683] "Waited before sending request" delay="196.344855ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:35.955374   69358 request.go:683] "Waited before sending request" delay="196.276117ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:35.958866   69358 pod_ready.go:94] pod "etcd-ha-326307-m03" is "Ready"
	I0919 22:24:35.958897   69358 pod_ready.go:86] duration metric: took 724.121102ms for pod "etcd-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.155371   69358 request.go:683] "Waited before sending request" delay="196.353052ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:24:36.158952   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.355354   69358 request.go:683] "Waited before sending request" delay="196.272183ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307"
	I0919 22:24:36.555231   69358 request.go:683] "Waited before sending request" delay="196.389456ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307"
	I0919 22:24:36.558900   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307" is "Ready"
	I0919 22:24:36.558927   69358 pod_ready.go:86] duration metric: took 399.940435ms for pod "kube-apiserver-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.558936   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.755357   69358 request.go:683] "Waited before sending request" delay="196.333509ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307-m02"
	I0919 22:24:36.955622   69358 request.go:683] "Waited before sending request" delay="196.371107ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:36.958850   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307-m02" is "Ready"
	I0919 22:24:36.958881   69358 pod_ready.go:86] duration metric: took 399.937855ms for pod "kube-apiserver-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.958892   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.155391   69358 request.go:683] "Waited before sending request" delay="196.40338ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307-m03"
	I0919 22:24:37.355336   69358 request.go:683] "Waited before sending request" delay="196.255836ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:37.358527   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307-m03" is "Ready"
	I0919 22:24:37.358558   69358 pod_ready.go:86] duration metric: took 399.659411ms for pod "kube-apiserver-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.555013   69358 request.go:683] "Waited before sending request" delay="196.298446ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0919 22:24:37.559362   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.755832   69358 request.go:683] "Waited before sending request" delay="196.350309ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307"
	I0919 22:24:37.954837   69358 request.go:683] "Waited before sending request" delay="195.286624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307"
	I0919 22:24:37.958236   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307" is "Ready"
	I0919 22:24:37.958266   69358 pod_ready.go:86] duration metric: took 398.878465ms for pod "kube-controller-manager-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.958274   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.155758   69358 request.go:683] "Waited before sending request" delay="197.394867ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m02"
	I0919 22:24:38.355929   69358 request.go:683] "Waited before sending request" delay="196.396129ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:38.359268   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307-m02" is "Ready"
	I0919 22:24:38.359292   69358 pod_ready.go:86] duration metric: took 401.013168ms for pod "kube-controller-manager-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.359301   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.555606   69358 request.go:683] "Waited before sending request" delay="196.234039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m03"
	I0919 22:24:38.755574   69358 request.go:683] "Waited before sending request" delay="196.387697ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:38.955366   69358 request.go:683] "Waited before sending request" delay="95.227976ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m03"
	I0919 22:24:39.154881   69358 request.go:683] "Waited before sending request" delay="196.301821ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:39.555649   69358 request.go:683] "Waited before sending request" delay="192.377634ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:39.955251   69358 request.go:683] "Waited before sending request" delay="92.286577ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	W0919 22:24:40.366591   69358 pod_ready.go:104] pod "kube-controller-manager-ha-326307-m03" is not "Ready", error: <nil>
	W0919 22:24:42.367386   69358 pod_ready.go:104] pod "kube-controller-manager-ha-326307-m03" is not "Ready", error: <nil>
	I0919 22:24:43.367824   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307-m03" is "Ready"
	I0919 22:24:43.367860   69358 pod_ready.go:86] duration metric: took 5.00855284s for pod "kube-controller-manager-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.371145   69358 pod_ready.go:83] waiting for pod "kube-proxy-8kxtv" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.376946   69358 pod_ready.go:94] pod "kube-proxy-8kxtv" is "Ready"
	I0919 22:24:43.376975   69358 pod_ready.go:86] duration metric: took 5.786362ms for pod "kube-proxy-8kxtv" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.376985   69358 pod_ready.go:83] waiting for pod "kube-proxy-q8mtj" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.555396   69358 request.go:683] "Waited before sending request" delay="178.323112ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8mtj"
	I0919 22:24:43.755331   69358 request.go:683] "Waited before sending request" delay="196.35612ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:43.758666   69358 pod_ready.go:94] pod "kube-proxy-q8mtj" is "Ready"
	I0919 22:24:43.758695   69358 pod_ready.go:86] duration metric: took 381.70368ms for pod "kube-proxy-q8mtj" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.758704   69358 pod_ready.go:83] waiting for pod "kube-proxy-ws89d" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.955265   69358 request.go:683] "Waited before sending request" delay="196.399278ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ws89d"
	I0919 22:24:44.155007   69358 request.go:683] "Waited before sending request" delay="196.303687ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:44.354881   69358 request.go:683] "Waited before sending request" delay="95.2124ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ws89d"
	I0919 22:24:44.555609   69358 request.go:683] "Waited before sending request" delay="197.246504ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:44.955613   69358 request.go:683] "Waited before sending request" delay="192.471154ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:45.355390   69358 request.go:683] "Waited before sending request" delay="92.281537ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	W0919 22:24:45.765195   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:48.265294   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:50.765471   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:53.265410   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:55.265474   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:57.765267   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:59.765483   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:02.266617   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:04.766256   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:07.265177   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:09.265694   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:11.765032   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:13.765313   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:15.766278   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	I0919 22:25:17.764644   69358 pod_ready.go:94] pod "kube-proxy-ws89d" is "Ready"
	I0919 22:25:17.764670   69358 pod_ready.go:86] duration metric: took 34.005951783s for pod "kube-proxy-ws89d" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.767738   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.772985   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307" is "Ready"
	I0919 22:25:17.773015   69358 pod_ready.go:86] duration metric: took 5.246042ms for pod "kube-scheduler-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.773023   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.778916   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307-m02" is "Ready"
	I0919 22:25:17.778942   69358 pod_ready.go:86] duration metric: took 5.914033ms for pod "kube-scheduler-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.778951   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.784122   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307-m03" is "Ready"
	I0919 22:25:17.784165   69358 pod_ready.go:86] duration metric: took 5.193982ms for pod "kube-scheduler-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.784183   69358 pod_ready.go:40] duration metric: took 42.630226972s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
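The label-based extra wait that just completed is, in effect, listing kube-system pods for each selector (k8s-app=kube-dns, component=etcd, and so on) and requiring their PodReady condition to be True. A hedged client-go sketch of one such check (hypothetical helper, not minikube's pod_ready.go):

    // Hypothetical sketch: report whether every kube-system pod matching a
    // label selector currently has PodReady=True.
    package kverify

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func labeledPodsReady(ctx context.Context, cs *kubernetes.Clientset, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                    break
                }
            }
            if !ready {
                return false, nil
            }
        }
        return true, nil
    }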
	I0919 22:25:17.833559   69358 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 22:25:17.835536   69358 out.go:179] * Done! kubectl is now configured to use "ha-326307" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7791f71e5d5a5       8c811b4aec35f       12 minutes ago      Running             busybox                   0                   b5e0c0fffea25       busybox-7b57f96db7-m8swj
	ca68bbc020e20       52546a367cc9e       13 minutes ago      Running             coredns                   0                   132023f334782       coredns-66bc5c9577-9j5pw
	1f618dc8f0392       52546a367cc9e       13 minutes ago      Running             coredns                   0                   a5ac32b4949ab       coredns-66bc5c9577-wqvzd
	f52d2d9f5881b       6e38f40d628db       13 minutes ago      Running             storage-provisioner       0                   7b77cca917bf4       storage-provisioner
	365cc00c2e009       409467f978b4a       13 minutes ago      Running             kindnet-cni               0                   96e027ec2b5fb       kindnet-gxnzs
	bd9e41958ffbb       df0860106674d       13 minutes ago      Running             kube-proxy                0                   06da62af16945       kube-proxy-8kxtv
	c6c963d9a0cae       765655ea60781       13 minutes ago      Running             kube-vip                  0                   5717652da0ef4       kube-vip-ha-326307
	456a0c3cbf5ce       46169d968e920       13 minutes ago      Running             kube-scheduler            0                   f02b9e82ff9b1       kube-scheduler-ha-326307
	05ab0247624a7       a0af72f2ec6d6       13 minutes ago      Running             kube-controller-manager   0                   6026f58e8c23a       kube-controller-manager-ha-326307
	e5c59a6abe977       5f1f5298c888d       13 minutes ago      Running             etcd                      0                   5f89382a468ad       etcd-ha-326307
	e80d65e3c7c18       90550c43ad2bc       13 minutes ago      Running             kube-apiserver            0                   3813626701bd1       kube-apiserver-ha-326307
	
	
	==> containerd <==
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.754439323Z" level=info msg="CreateContainer within sandbox \"a5ac32b4949abcc8c1007cd2947e92633d80d759aeaf0e7d6b490f2610f81170\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.768027085Z" level=info msg="CreateContainer within sandbox \"a5ac32b4949abcc8c1007cd2947e92633d80d759aeaf0e7d6b490f2610f81170\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\""
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.768844132Z" level=info msg="StartContainer for \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\""
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.836885904Z" level=info msg="StartContainer for \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\" returns successfully"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.632881043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9j5pw,Uid:7d073e38-b63e-494d-bda0-3dde372a950b,Namespace:kube-system,Attempt:0,}"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.759782586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9j5pw,Uid:7d073e38-b63e-494d-bda0-3dde372a950b,Namespace:kube-system,Attempt:0,} returns sandbox id \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.765750080Z" level=info msg="CreateContainer within sandbox \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.779792584Z" level=info msg="CreateContainer within sandbox \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.780572301Z" level=info msg="StartContainer for \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.854015268Z" level=info msg="StartContainer for \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\" returns successfully"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.151709073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-m8swj,Uid:7533a5f9-7c6d-4476-9e03-eb8abe0aadbc,Namespace:default,Attempt:0,}"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.267660233Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.268098400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-m8swj,Uid:7533a5f9-7c6d-4476-9e03-eb8abe0aadbc,Namespace:default,Attempt:0,} returns sandbox id \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\""
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.270196453Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.412014033Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.413088793Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.414707234Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.417602556Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.418335313Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 2.148090964s"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.418383876Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.423388311Z" level=info msg="CreateContainer within sandbox \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.442455841Z" level=info msg="CreateContainer within sandbox \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.443119612Z" level=info msg="StartContainer for \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.497884940Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.500641712Z" level=info msg="StartContainer for \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\" returns successfully"
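	
	The containerd entries above come from the node's containerd service log. Assuming the standard systemd layout of the minikube base image, the same window can be pulled with journalctl from inside the node:
	  $ minikube -p ha-326307 ssh -- sudo journalctl -u containerd --since "2025-09-19 22:23:00" --no-pager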
	
	
	==> coredns [1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54337 - 24572 "HINFO IN 5143313645322175939.5313042790825403134. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069732464s
	[INFO] 10.244.0.4:35490 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000326279s
	[INFO] 10.244.0.4:55330 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.014239882s
	[INFO] 10.244.1.2:39628 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210602s
	[INFO] 10.244.1.2:46891 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.001261026s
	[INFO] 10.244.1.2:43124 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.00098216s
	[INFO] 10.244.1.2:49555 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.00013424s
	[INFO] 10.244.0.4:40362 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00024135s
	[INFO] 10.244.0.4:45629 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168694s
	[INFO] 10.244.1.2:52354 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189457s
	[INFO] 10.244.1.2:43857 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161715s
	[INFO] 10.244.1.2:51922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145764s
	[INFO] 10.244.1.2:57320 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009888497s
	[INFO] 10.244.1.2:49841 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169285s
	[INFO] 10.244.0.4:51548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159656s
	[INFO] 10.244.0.4:48681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110507s
	[INFO] 10.244.1.2:52993 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137337s
	
	
	==> coredns [ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49588 - 50300 "HINFO IN 9047056621409016881.2982736294753326061. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063128768s
	[INFO] 10.244.0.4:39328 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.014249598s
	[INFO] 10.244.0.4:59759 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.013332957s
	[INFO] 10.244.0.4:50336 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.008865788s
	[INFO] 10.244.1.2:42753 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000158745s
	[INFO] 10.244.0.4:52334 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261159s
	[INFO] 10.244.0.4:43558 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010645205s
	[INFO] 10.244.0.4:51059 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154122s
	[INFO] 10.244.0.4:46147 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012594143s
	[INFO] 10.244.0.4:39163 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143755s
	[INFO] 10.244.0.4:57061 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014731s
	[INFO] 10.244.1.2:59502 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000129746s
	[INFO] 10.244.1.2:49570 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172915s
	[INFO] 10.244.1.2:48519 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000175653s
	[INFO] 10.244.0.4:50569 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000326714s
	[INFO] 10.244.0.4:45465 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000234038s
	[INFO] 10.244.1.2:52569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176154s
	[INFO] 10.244.1.2:36719 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000205481s
	[INFO] 10.244.1.2:58705 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195468s
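	
	Both CoreDNS replicas are answering the lookups issued by the test's busybox pods; the repeated kubernetes.default and kubernetes.default.svc.cluster.local queries above originate from the 10.244.0.0/24 and 10.244.1.0/24 pod CIDRs. A minimal way to re-run such a lookup by hand and to fetch these logs, assuming the pod names from the container status table above and kubectl pointed at the ha-326307 cluster:
	  $ kubectl exec busybox-7b57f96db7-m8swj -- nslookup kubernetes.default
	  $ kubectl -n kube-system logs coredns-66bc5c9577-9j5pw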
	
	
	==> describe nodes <==
	Name:               ha-326307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:23:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:37:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-326307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 2616418f44a84ee78b49dce19e95d1fb
	  System UUID:                9c3f30ed-68b2-4a1c-af95-9031ae210a78
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-m8swj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-9j5pw             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 coredns-66bc5c9577-wqvzd             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 etcd-ha-326307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-gxnzs                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-326307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-326307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-8kxtv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-326307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-326307                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           12m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	
	
	Name:               ha-326307-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:37:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-326307-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 e4f3b60b3b464269bc193e23d4361613
	  System UUID:                8095cd89-f43b-4d8a-adef-b40d6aaa7ad2
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tfpvf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-326307-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-mk6pv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-326307-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-326307-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-q8mtj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-326307-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-326307-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        13m   kube-proxy       
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode  12m   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	
	
	Name:               ha-326307-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:37:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-326307-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 1434e19b2a274233a619428a76d99322
	  System UUID:                5814a8d4-c435-490f-8e5e-a8b038e01be7
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-jdczt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-326307-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-dmxl8                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-326307-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-326307-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-ws89d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-326307-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-326307-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  12m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode  12m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode  12m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
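	
	All three control-plane nodes (ha-326307, -m02, -m03) report Ready and carry the expected control-plane pods plus one busybox replica each. A quick way to confirm the same state interactively, assuming kubectl is configured for the ha-326307 cluster as reported earlier in this log:
	  $ kubectl get nodes -o wide
	  $ kubectl describe node ha-326307-m03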
	
	
	==> dmesg <==
	[Sep19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001853] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001006] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.090013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.467231] i8042: Warning: Keylock active
	[  +0.009747] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004210] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001075] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000901] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000908] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001042] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001465] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.534799] block sda: the capability attribute has been deprecated.
	[  +0.099617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027269] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.089616] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [e5c59a6abe97751de42afd27010936d1c3a401fad6cd730e75a1692a895b4fbc] <==
	{"level":"warn","ts":"2025-09-19T22:24:25.337105Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:25.337366Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:24:25.352476Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5512420eb470d1ce","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-09-19T22:24:25.352519Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.352532Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.355631Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5512420eb470d1ce","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-19T22:24:25.355692Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.355712Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.427429Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.428290Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:24:25.447984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:32950","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:24:25.491427Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(6130034673728934350 12593026477526642892 16449250771884659557)"}
	{"level":"info","ts":"2025-09-19T22:24:25.491593Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.491634Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:24:25.493734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:32956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:25.530775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:32980","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:24:25.607668Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"e4477a6cd7815365","bytes":946167,"size":"946 kB","took":"30.009579431s"}
	{"level":"info","ts":"2025-09-19T22:24:29.797825Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:31.923615Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:35.871798Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:53.749925Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:55.314881Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"5512420eb470d1ce","bytes":1356311,"size":"1.4 MB","took":"30.015547589s"}
	{"level":"info","ts":"2025-09-19T22:33:30.750666Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1558}
	{"level":"info","ts":"2025-09-19T22:33:30.775074Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1558,"took":"23.935678ms","hash":623549535,"current-db-size-bytes":4292608,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":2306048,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-09-19T22:33:30.775132Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":623549535,"revision":1558,"compact-revision":-1}
	
	
	==> kernel <==
	 22:37:25 up  1:19,  0 users,  load average: 0.60, 0.58, 0.70
	Linux ha-326307 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [365cc00c2e009eeed7e71d1202a4d406c12f0d9faee38762ba691eb2d7c71f89] <==
	I0919 22:36:40.991246       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:36:50.998290       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:36:50.998332       1 main.go:301] handling current node
	I0919 22:36:50.998351       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:36:50.998359       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:36:50.998554       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:36:50.998568       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:00.996278       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:37:00.996316       1 main.go:301] handling current node
	I0919 22:37:00.996331       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:37:00.996336       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:37:00.996584       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:37:00.996603       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:10.992294       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:37:10.992334       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:10.992571       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:37:10.992589       1 main.go:301] handling current node
	I0919 22:37:10.992605       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:37:10.992614       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:37:20.990243       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:37:20.990316       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:20.990527       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:37:20.990541       1 main.go:301] handling current node
	I0919 22:37:20.990553       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:37:20.990557       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e80d65e3c7c18da87f7fa003e39382f5a4285ba4782fc295197421c6b882a161] <==
	I0919 22:28:24.938045       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:27.132243       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:46.201118       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:52.628026       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:52.147734       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:53.858237       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:15.996526       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:22.110278       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:33:31.733595       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:33:36.316232       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:33:41.440724       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:43.430235       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:04.843923       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:47.576277       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:36:07.778568       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:37:07.288814       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:37:22.531524       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43412: use of closed network connection
	E0919 22:37:22.776721       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43434: use of closed network connection
	E0919 22:37:22.970082       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43448: use of closed network connection
	E0919 22:37:23.110093       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43464: use of closed network connection
	E0919 22:37:23.308629       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43484: use of closed network connection
	E0919 22:37:23.494833       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43500: use of closed network connection
	E0919 22:37:23.634448       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43520: use of closed network connection
	E0919 22:37:23.803885       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43532: use of closed network connection
	E0919 22:37:23.968210       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43546: use of closed network connection
	
	
	==> kube-controller-manager [05ab0247624a7f8ffa6bc948e3abc3adc49911c297291eb0a6dd42e3df39f4cd] <==
	I0919 22:23:38.744532       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:23:38.744726       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0919 22:23:38.744739       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0919 22:23:38.744729       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:23:38.744737       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:23:38.744759       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:23:38.745195       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 22:23:38.745255       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:23:38.746448       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:23:38.748706       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 22:23:38.750017       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.750086       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:23:38.751270       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 22:23:38.760899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 22:23:38.760926       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 22:23:38.760971       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 22:23:38.765332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.771790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:08.307746       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m02\" does not exist"
	I0919 22:24:08.319829       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:24:08.699971       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m02"
	E0919 22:24:31.036531       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8ztpb failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8ztpb\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:24:31.706808       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m03\" does not exist"
	I0919 22:24:31.736561       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:24:33.715916       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m03"
	
	
	==> kube-proxy [bd9e41958ffbbb27dab3d180a56fb27df36a1a1896db3a6f322a8aabcda57677] <==
	I0919 22:23:40.183862       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:23:40.251957       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:23:40.353105       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:23:40.353291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:23:40.353503       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:23:40.383440       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:23:40.383522       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:23:40.391534       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:23:40.391999       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:23:40.392045       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:23:40.394189       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:23:40.394304       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:23:40.394470       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:23:40.394480       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:23:40.394266       1 config.go:200] "Starting service config controller"
	I0919 22:23:40.394506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:23:40.394279       1 config.go:309] "Starting node config controller"
	I0919 22:23:40.394533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:23:40.394540       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:23:40.494617       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:23:40.494643       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:23:40.494649       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [456a0c3cbf5ce028b9cbac658728c1fee13ad8e2659bfa0c625cd685d711c708] <==
	E0919 22:23:33.115040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:23:33.129278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:23:33.194774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:23:33.337699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0919 22:23:35.055089       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:24:08.346116       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:08.346301       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:08.365410       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-78xs2" node="ha-326307-m02"
	E0919 22:24:08.365600       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="kube-system/kindnet-78xs2"
	E0919 22:24:08.379248       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"kindnet-78xs2\" not found" pod="kube-system/kindnet-78xs2"
	E0919 22:24:10.002296       1 schedule_one.go:975] "Scheduler cache AssumePod failed" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:10.002334       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	I0919 22:24:10.002368       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:31.751287       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-pnj9r" node="ha-326307-m03"
	E0919 22:24:31.751375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-pnj9r"
	E0919 22:24:31.887089       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:31.887576       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 173e48ec-ef56-4824-9f55-a04b199b7943(kube-system/kindnet-qxwpq) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	E0919 22:24:31.887605       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	I0919 22:24:31.888969       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:35.828083       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-nzzlb" node="ha-326307-m03"
	E0919 22:24:35.828187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-nzzlb"
	E0919 22:24:35.839864       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	E0919 22:24:35.839940       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ba4fd407-2e93-4324-ab2d-4f192d79fdf5(kube-system/kindnet-dmxl8) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	E0919 22:24:35.839964       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	I0919 22:24:35.841757       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	
	
	==> kubelet <==
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638035    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-cni-cfg\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638087    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-xtables-lock\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638115    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70be5fcc-7ab6-4eb1-870d-988fee1a01bb-kube-proxy\") pod \"kube-proxy-8kxtv\" (UID: \"70be5fcc-7ab6-4eb1-870d-988fee1a01bb\") " pod="kube-system/kube-proxy-8kxtv"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140870    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64376c4d-1b82-490d-887d-7f628b134014-config-volume\") pod \"coredns-66bc5c9577-wqvzd\" (UID: \"64376c4d-1b82-490d-887d-7f628b134014\") " pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140945    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d073e38-b63e-494d-bda0-3dde372a950b-config-volume\") pod \"coredns-66bc5c9577-9j5pw\" (UID: \"7d073e38-b63e-494d-bda0-3dde372a950b\") " pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140976    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkhk\" (UniqueName: \"kubernetes.io/projected/64376c4d-1b82-490d-887d-7f628b134014-kube-api-access-8tkhk\") pod \"coredns-66bc5c9577-wqvzd\" (UID: \"64376c4d-1b82-490d-887d-7f628b134014\") " pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.141004    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gmbw\" (UniqueName: \"kubernetes.io/projected/7d073e38-b63e-494d-bda0-3dde372a950b-kube-api-access-8gmbw\") pod \"coredns-66bc5c9577-9j5pw\" (UID: \"7d073e38-b63e-494d-bda0-3dde372a950b\") " pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319752    1670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\""
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319858    1670 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\"" pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319884    1670 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\"" pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319966    1670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wqvzd_kube-system(64376c4d-1b82-490d-887d-7f628b134014)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wqvzd_kube-system(64376c4d-1b82-490d-887d-7f628b134014)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\\\": failed to find network info for sandbox \\\"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\\\"\"" pod="kube-system/coredns-66bc5c9577-wqvzd" podUID="64376c4d-1b82-490d-887d-7f628b134014"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332044    1670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\""
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332130    1670 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\"" pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332205    1670 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\"" pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332288    1670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9j5pw_kube-system(7d073e38-b63e-494d-bda0-3dde372a950b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9j5pw_kube-system(7d073e38-b63e-494d-bda0-3dde372a950b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\\\": failed to find network info for sandbox \\\"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\\\"\"" pod="kube-system/coredns-66bc5c9577-9j5pw" podUID="7d073e38-b63e-494d-bda0-3dde372a950b"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.543914    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cafe04c6-2dce-4b93-b6d1-205efc39b360-tmp\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.543969    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47vqf\" (UniqueName: \"kubernetes.io/projected/cafe04c6-2dce-4b93-b6d1-205efc39b360-kube-api-access-47vqf\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.684901    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gxnzs" podStartSLOduration=1.68487896 podStartE2EDuration="1.68487896s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:40.684630982 +0000 UTC m=+6.151051272" watchObservedRunningTime="2025-09-19 22:23:40.68487896 +0000 UTC m=+6.151299251"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.685802    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8kxtv" podStartSLOduration=1.685781067 podStartE2EDuration="1.685781067s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:40.670987608 +0000 UTC m=+6.137407898" watchObservedRunningTime="2025-09-19 22:23:40.685781067 +0000 UTC m=+6.152201360"
	Sep 19 22:23:41 ha-326307 kubelet[1670]: I0919 22:23:41.676063    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.676036489 podStartE2EDuration="1.676036489s" podCreationTimestamp="2025-09-19 22:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:41.675998333 +0000 UTC m=+7.142418624" watchObservedRunningTime="2025-09-19 22:23:41.676036489 +0000 UTC m=+7.142456778"
	Sep 19 22:23:45 ha-326307 kubelet[1670]: I0919 22:23:45.164667    1670 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:23:45 ha-326307 kubelet[1670]: I0919 22:23:45.165981    1670 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:23:52 ha-326307 kubelet[1670]: I0919 22:23:52.703916    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wqvzd" podStartSLOduration=13.703896267 podStartE2EDuration="13.703896267s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:52.703429297 +0000 UTC m=+18.169849612" watchObservedRunningTime="2025-09-19 22:23:52.703896267 +0000 UTC m=+18.170316558"
	Sep 19 22:23:56 ha-326307 kubelet[1670]: I0919 22:23:56.724956    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9j5pw" podStartSLOduration=17.724936721 podStartE2EDuration="17.724936721s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:56.724564031 +0000 UTC m=+22.190984322" watchObservedRunningTime="2025-09-19 22:23:56.724936721 +0000 UTC m=+22.191357012"
	Sep 19 22:25:18 ha-326307 kubelet[1670]: I0919 22:25:18.904730    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt2kb\" (UniqueName: \"kubernetes.io/projected/7533a5f9-7c6d-4476-9e03-eb8abe0aadbc-kube-api-access-rt2kb\") pod \"busybox-7b57f96db7-m8swj\" (UID: \"7533a5f9-7c6d-4476-9e03-eb8abe0aadbc\") " pod="default/busybox-7b57f96db7-m8swj"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-326307 -n ha-326307
helpers_test.go:269: (dbg) Run:  kubectl --context ha-326307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-jdczt
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeployApp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-326307 describe pod busybox-7b57f96db7-jdczt
helpers_test.go:290: (dbg) kubectl --context ha-326307 describe pod busybox-7b57f96db7-jdczt:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-jdczt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ha-326307-m03/192.168.49.4
	Start Time:       Fri, 19 Sep 2025 22:25:18 +0000
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwg8l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xwg8l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                  Age                From               Message
	  ----     ------                  ----               ----               -------
	  Warning  FailedScheduling        12m                default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-jdczt": pod busybox-7b57f96db7-jdczt is already assigned to node "ha-326307-m03"
	  Warning  FailedScheduling        12m                default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-jdczt": pod busybox-7b57f96db7-jdczt is already assigned to node "ha-326307-m03"
	  Normal   Scheduled               12m                default-scheduler  Successfully assigned default/busybox-7b57f96db7-jdczt to ha-326307-m03
	  Warning  FailedCreatePodSandBox  12m                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f949ef20a496c1ef4510b9586bfdf0aa02ea1ca9948f762b5576ef36acab80c9": failed to find network info for sandbox "f949ef20a496c1ef4510b9586bfdf0aa02ea1ca9948f762b5576ef36acab80c9"
	  Warning  FailedCreatePodSandBox  11m                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "306b20c8e47aaeb0b6ae068e406157020ceddab45da2b4f2ab7d80c0e47f4391": failed to find network info for sandbox "306b20c8e47aaeb0b6ae068e406157020ceddab45da2b4f2ab7d80c0e47f4391"
	  Warning  FailedCreatePodSandBox  11m                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e6c50e24733dc1514dd610f1c51f99bc1a57d10929036ae887a87c4b187b9ac1": failed to find network info for sandbox "e6c50e24733dc1514dd610f1c51f99bc1a57d10929036ae887a87c4b187b9ac1"
	  Warning  FailedCreatePodSandBox  11m                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9d4e7715d1c071862624112264db649229347a018044c9075df60fb9940c8e8a": failed to find network info for sandbox "9d4e7715d1c071862624112264db649229347a018044c9075df60fb9940c8e8a"
	  Warning  FailedCreatePodSandBox  11m                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d188384b76e1cf43ce05a368351c54023e455d3fd0fddf79dc0d717558b93ee6": failed to find network info for sandbox "d188384b76e1cf43ce05a368351c54023e455d3fd0fddf79dc0d717558b93ee6"
	  Warning  FailedCreatePodSandBox  11m                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "726dbde7347664ddd373a329867d125e92a7173ca43b01448ce154579a81a0bb": failed to find network info for sandbox "726dbde7347664ddd373a329867d125e92a7173ca43b01448ce154579a81a0bb"
	  Warning  FailedCreatePodSandBox  10m                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e45f823e38e8e10ed14077a52e1750763b0c366d8a775bbf53f656c10861f185": failed to find network info for sandbox "e45f823e38e8e10ed14077a52e1750763b0c366d8a775bbf53f656c10861f185"
	  Warning  FailedCreatePodSandBox  10m                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "92ae39dcfd89289f5a5fdc5ae0c23196a91b58214916aead7f95620a2697c009": failed to find network info for sandbox "92ae39dcfd89289f5a5fdc5ae0c23196a91b58214916aead7f95620a2697c009"
	  Warning  FailedCreatePodSandBox  10m                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "98a14fefd702a6a8ff4d95d8bebac62053e439ee134d840d76e175ba4e8c45d6": failed to find network info for sandbox "98a14fefd702a6a8ff4d95d8bebac62053e439ee134d840d76e175ba4e8c45d6"
	  Warning  FailedCreatePodSandBox  2m (x39 over 10m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "68cb483b1808e73ea325cca055c7a7f1bd2a591a81aa8b2bcb8cb96560fd08b2": failed to find network info for sandbox "68cb483b1808e73ea325cca055c7a7f1bd2a591a81aa8b2bcb8cb96560fd08b2"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (727.71s)
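Every FailedCreatePodSandBox event in the describe output above carries the same containerd error, "failed to find network info for sandbox", which is what containerd reports when no CNI configuration is available on the node (ha-326307-m03 here). A minimal triage sketch, assuming kindnet is the CNI in play (as the scheduler and kubelet logs above suggest) and that it writes its config under /etc/cni/net.d; the pod name kindnet-dmxl8 comes from the scheduler log earlier in this report, while the config path and the -n/--node flag usage are assumptions, not part of the test run:

  # List the CNI pods and the nodes they landed on
  kubectl --context ha-326307 -n kube-system get pods -o wide | grep kindnet

  # Logs of the kindnet pod assigned to ha-326307-m03 (pod name taken from the scheduler log above)
  kubectl --context ha-326307 -n kube-system logs kindnet-dmxl8 --tail=50

  # Check whether a CNI config file was ever written on the affected node (path assumed from kindnet defaults)
  minikube -p ha-326307 ssh -n ha-326307-m03 -- ls -l /etc/cni/net.d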

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (3.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (134.7895ms)

                                                
                                                
** stderr ** 
	error: Internal error occurred: unable to upgrade connection: container not found ("busybox")

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-7b57f96db7-jdczt could not resolve 'host.minikube.internal': exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- sh -c "ping -c 1 192.168.49.1"
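The exec failure above, error: Internal error occurred: unable to upgrade connection: container not found ("busybox"), is consistent with the DeployApp post-mortem: busybox-7b57f96db7-jdczt is still in ContainerCreating because its sandbox never came up, so there is no busybox container to attach to. A quick pre-flight check, sketched here as an illustration rather than something the test suite runs:

  # Per-container state for the stuck pod; a pod waiting in ContainerCreating has nothing to exec into
  kubectl --context ha-326307 get pod busybox-7b57f96db7-jdczt -o jsonpath='{.status.containerStatuses[*].state}'

  # Readiness across the whole busybox deployment (label taken from the describe output above)
  kubectl --context ha-326307 get pods -l app=busybox -o wide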
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-326307
helpers_test.go:243: (dbg) docker inspect ha-326307:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	        "Created": "2025-09-19T22:23:18.619000062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 69921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:23:18.670514121Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hosts",
	        "LogPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3-json.log",
	        "Name": "/ha-326307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-326307:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-326307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	                "LowerDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-326307",
	                "Source": "/var/lib/docker/volumes/ha-326307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-326307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-326307",
	                "name.minikube.sigs.k8s.io": "ha-326307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b9c61cd0152986e2b265b3cf0a7628b1c049e495ce30493b8e54f6b9446115f",
	            "SandboxKey": "/var/run/docker/netns/8b9c61cd0152",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-326307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:80:09:d2:65:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "465af21e2d8d112a34f14f7c3ab89eac4e6e57f582ff8e32b514381f55dd085e",
	                    "EndpointID": "f35735061c65841c2c1ba7f2859db25885582588fa8f2d14e3a015320f6c3fc4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-326307",
	                        "5e0f1fe86b08"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
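Two details in the inspect output are directly relevant to this test: the node sits on the user-defined bridge network ha-326307 at 192.168.49.2, whose gateway 192.168.49.1 is the host-side address the pods ping for host.minikube.internal, and every cluster port is published only on 127.0.0.1 (SSH 22 on 32788, the API server's 8443 on 32791). Two read-only checks, sketched under the assumption that the network and the mapped host ports above are still current (they change across container restarts):

  # Gateway of the minikube docker network; this is the 192.168.49.1 address targeted by "ping -c 1 192.168.49.1"
  docker network inspect ha-326307 --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'

  # SSH into the node through the published port, using the machine key minikube generated (paths appear in the Last Start log below)
  ssh -i /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa -p 32788 docker@127.0.0.1 hostname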
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-326307 -n ha-326307
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 logs -n 25: (1.265900095s)
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- nslookup kubernetes.io                                              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- nslookup kubernetes.io                                              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- nslookup kubernetes.io                                              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- nslookup kubernetes.default                                         │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- nslookup kubernetes.default                                         │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- nslookup kubernetes.default                                         │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- nslookup kubernetes.default.svc.cluster.local                       │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- nslookup kubernetes.default.svc.cluster.local                       │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- nslookup kubernetes.default.svc.cluster.local                       │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- sh -c ping -c 1 192.168.49.1                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- sh -c ping -c 1 192.168.49.1                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:23:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:23:13.527478   69358 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:23:13.527574   69358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:13.527579   69358 out.go:374] Setting ErrFile to fd 2...
	I0919 22:23:13.527586   69358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:13.527823   69358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:23:13.528355   69358 out.go:368] Setting JSON to false
	I0919 22:23:13.529260   69358 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3938,"bootTime":1758316656,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:23:13.529345   69358 start.go:140] virtualization: kvm guest
	I0919 22:23:13.531661   69358 out.go:179] * [ha-326307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:23:13.533198   69358 notify.go:220] Checking for updates...
	I0919 22:23:13.533231   69358 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:23:13.534827   69358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:23:13.536340   69358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:23:13.537773   69358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:23:13.539372   69358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:23:13.541189   69358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:23:13.542697   69358 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:23:13.568228   69358 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:23:13.568380   69358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:13.622546   69358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:23:13.612893654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:13.622646   69358 docker.go:318] overlay module found
	I0919 22:23:13.624668   69358 out.go:179] * Using the docker driver based on user configuration
	I0919 22:23:13.626116   69358 start.go:304] selected driver: docker
	I0919 22:23:13.626134   69358 start.go:918] validating driver "docker" against <nil>
	I0919 22:23:13.626147   69358 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:23:13.626725   69358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:13.684385   69358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:23:13.672811393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:13.684569   69358 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:23:13.684775   69358 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:23:13.686618   69358 out.go:179] * Using Docker driver with root privileges
	I0919 22:23:13.687924   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:13.688000   69358 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:23:13.688014   69358 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:23:13.688089   69358 start.go:348] cluster config:
	{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m
0s}
	I0919 22:23:13.689601   69358 out.go:179] * Starting "ha-326307" primary control-plane node in "ha-326307" cluster
	I0919 22:23:13.691305   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:23:13.692823   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:23:13.694304   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:13.694378   69358 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 22:23:13.694398   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:23:13.694426   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:23:13.694515   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:23:13.694533   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:23:13.694981   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:13.695014   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json: {Name:mk9e3af266bcfbabd18624d7d22535c6f1841e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:13.716737   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:23:13.716759   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:23:13.716776   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:23:13.716797   69358 start.go:360] acquireMachinesLock for ha-326307: {Name:mk42b79b90944aab63c8b37c2f94e04ca1ebec1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:23:13.716893   69358 start.go:364] duration metric: took 80.537µs to acquireMachinesLock for "ha-326307"
	I0919 22:23:13.716915   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:13.716974   69358 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:23:13.719062   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:23:13.719317   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:23:13.719352   69358 client.go:168] LocalClient.Create starting
	I0919 22:23:13.719447   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:23:13.719502   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:13.719517   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:13.719580   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:23:13.719600   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:13.719610   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:13.719933   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:23:13.737609   69358 cli_runner.go:211] docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:23:13.737699   69358 network_create.go:284] running [docker network inspect ha-326307] to gather additional debugging logs...
	I0919 22:23:13.737725   69358 cli_runner.go:164] Run: docker network inspect ha-326307
	W0919 22:23:13.755400   69358 cli_runner.go:211] docker network inspect ha-326307 returned with exit code 1
	I0919 22:23:13.755437   69358 network_create.go:287] error running [docker network inspect ha-326307]: docker network inspect ha-326307: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-326307 not found
	I0919 22:23:13.755455   69358 network_create.go:289] output of [docker network inspect ha-326307]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-326307 not found
	
	** /stderr **
	I0919 22:23:13.755563   69358 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:13.774541   69358 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018eb270}
	I0919 22:23:13.774578   69358 network_create.go:124] attempt to create docker network ha-326307 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:23:13.774619   69358 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-326307 ha-326307
	I0919 22:23:13.834699   69358 network_create.go:108] docker network ha-326307 192.168.49.0/24 created
	I0919 22:23:13.834730   69358 kic.go:121] calculated static IP "192.168.49.2" for the "ha-326307" container
	I0919 22:23:13.834799   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:23:13.852316   69358 cli_runner.go:164] Run: docker volume create ha-326307 --label name.minikube.sigs.k8s.io=ha-326307 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:23:13.872969   69358 oci.go:103] Successfully created a docker volume ha-326307
	I0919 22:23:13.873115   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307 --entrypoint /usr/bin/test -v ha-326307:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:23:14.277718   69358 oci.go:107] Successfully prepared a docker volume ha-326307
	I0919 22:23:14.277762   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:14.277789   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:23:14.277852   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:23:18.547851   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.269954037s)
	I0919 22:23:18.547886   69358 kic.go:203] duration metric: took 4.270092787s to extract preloaded images to volume ...
	W0919 22:23:18.548002   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:23:18.548044   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:23:18.548091   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:23:18.602395   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307 --name ha-326307 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307 --network ha-326307 --ip 192.168.49.2 --volume ha-326307:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:23:18.902433   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Running}}
	I0919 22:23:18.923488   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:18.945324   69358 cli_runner.go:164] Run: docker exec ha-326307 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:23:18.998198   69358 oci.go:144] the created container "ha-326307" has a running status.
	I0919 22:23:18.998254   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa...
	I0919 22:23:19.305578   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:23:19.305639   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:23:19.338987   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:19.361057   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:23:19.361077   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:23:19.423644   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:19.446710   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:23:19.446815   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.468914   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.469178   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.469194   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:23:19.609654   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:23:19.609685   69358 ubuntu.go:182] provisioning hostname "ha-326307"
	I0919 22:23:19.609806   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.631352   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.631769   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.631790   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307 && echo "ha-326307" | sudo tee /etc/hostname
	I0919 22:23:19.783770   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:23:19.783868   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.802757   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.802967   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.802990   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:23:19.942778   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:23:19.942811   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:23:19.942925   69358 ubuntu.go:190] setting up certificates
	I0919 22:23:19.942949   69358 provision.go:84] configureAuth start
	I0919 22:23:19.943010   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:19.963444   69358 provision.go:143] copyHostCerts
	I0919 22:23:19.963491   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:19.963531   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:23:19.963541   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:19.963629   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:23:19.963778   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:19.963807   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:23:19.963811   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:19.963862   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:23:19.963997   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:19.964030   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:23:19.964040   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:19.964080   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:23:19.964187   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307 san=[127.0.0.1 192.168.49.2 ha-326307 localhost minikube]
	I0919 22:23:20.747311   69358 provision.go:177] copyRemoteCerts
	I0919 22:23:20.747377   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:23:20.747410   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:20.766468   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:20.866991   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:23:20.867057   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:23:20.897799   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:23:20.897858   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:23:20.925953   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:23:20.926026   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:23:20.954845   69358 provision.go:87] duration metric: took 1.011880735s to configureAuth
	I0919 22:23:20.954872   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:23:20.955074   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:20.955089   69358 machine.go:96] duration metric: took 1.508356629s to provisionDockerMachine
	I0919 22:23:20.955096   69358 client.go:171] duration metric: took 7.235738314s to LocalClient.Create
	I0919 22:23:20.955122   69358 start.go:167] duration metric: took 7.235806728s to libmachine.API.Create "ha-326307"
	I0919 22:23:20.955128   69358 start.go:293] postStartSetup for "ha-326307" (driver="docker")
	I0919 22:23:20.955136   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:23:20.955224   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:23:20.955259   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:20.975767   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.077921   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:23:21.081820   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:23:21.081872   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:23:21.081881   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:23:21.081888   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:23:21.081901   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:23:21.081973   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:23:21.082057   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:23:21.082071   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:23:21.082204   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:23:21.092245   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:21.123732   69358 start.go:296] duration metric: took 168.590139ms for postStartSetup
	I0919 22:23:21.124127   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:21.143109   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:21.143414   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:23:21.143466   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.162970   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.258062   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:23:21.263437   69358 start.go:128] duration metric: took 7.546444684s to createHost
	I0919 22:23:21.263491   69358 start.go:83] releasing machines lock for "ha-326307", held for 7.546570423s
	I0919 22:23:21.263561   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:21.282251   69358 ssh_runner.go:195] Run: cat /version.json
	I0919 22:23:21.282309   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.282391   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:23:21.282539   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.302076   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.302858   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.477003   69358 ssh_runner.go:195] Run: systemctl --version
	I0919 22:23:21.481946   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:23:21.486736   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:23:21.519470   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:23:21.519573   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:23:21.549703   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:23:21.549736   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:23:21.549772   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:23:21.549813   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:23:21.563897   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:23:21.577043   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:23:21.577104   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:23:21.591898   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:23:21.607905   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:23:21.677531   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:23:21.749223   69358 docker.go:234] disabling docker service ...
	I0919 22:23:21.749348   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:23:21.771648   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:23:21.786268   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:23:21.864247   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:23:21.930620   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:23:21.943680   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:23:21.963319   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:23:21.977473   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:23:21.989630   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:23:21.989705   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:23:22.001778   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:22.013415   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:23:22.024683   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:22.036042   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:23:22.047238   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:23:22.060239   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:23:22.074324   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:23:22.087081   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:23:22.099883   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:23:22.110348   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:22.180253   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:23:22.295748   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:23:22.295832   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:23:22.300535   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:23:22.300597   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:23:22.304676   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:23:22.344790   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:23:22.344850   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:22.371338   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:22.400934   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:23:22.402669   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:22.421952   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:23:22.426523   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:22.442415   69358 kubeadm.go:875] updating cluster {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:23:22.442712   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:22.442823   69358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:23:22.482684   69358 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:23:22.482710   69358 containerd.go:534] Images already preloaded, skipping extraction
	I0919 22:23:22.482762   69358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:23:22.518500   69358 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:23:22.518526   69358 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:23:22.518533   69358 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0919 22:23:22.518616   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:23:22.518668   69358 ssh_runner.go:195] Run: sudo crictl info
	I0919 22:23:22.554956   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:22.554993   69358 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:23:22.555004   69358 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:23:22.555029   69358 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326307 NodeName:ha-326307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:23:22.555176   69358 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-326307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:23:22.555209   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:23:22.555273   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:23:22.568901   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:23:22.569038   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:23:22.569091   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:23:22.580223   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:23:22.580317   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:23:22.591268   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 22:23:22.612688   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:23:22.636770   69358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0919 22:23:22.658657   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:23:22.681384   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:23:22.685531   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:22.698340   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:22.769217   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:23:22.792280   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.2
	I0919 22:23:22.792300   69358 certs.go:194] generating shared ca certs ...
	I0919 22:23:22.792315   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.792509   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:23:22.792553   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:23:22.792563   69358 certs.go:256] generating profile certs ...
	I0919 22:23:22.792630   69358 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:23:22.792643   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt with IP's: []
	I0919 22:23:22.975725   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt ...
	I0919 22:23:22.975759   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt: {Name:mk32bca88dd6748516774b56251f96e4fc38a69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.975973   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key ...
	I0919 22:23:22.975990   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key: {Name:mkc0e836c004e527dbd2787dc00463a0715cf8a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.976108   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226
	I0919 22:23:22.976125   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:23:23.460427   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 ...
	I0919 22:23:23.460460   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226: {Name:mk98859e0e43a6d4b4da591dc89695908954cc81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.460672   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226 ...
	I0919 22:23:23.460693   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226: {Name:mk3473c1668aec72ec5a5598645b70e29415cdd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.460941   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:23:23.461078   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
	I0919 22:23:23.461207   69358 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:23:23.461233   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt with IP's: []
	I0919 22:23:23.489621   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt ...
	I0919 22:23:23.489652   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt: {Name:mk06f3b4cfde33781bd7076ead00f94525257452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.489837   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key ...
	I0919 22:23:23.489860   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key: {Name:mk632a617a99ac85bf5a9b022d1173caf8e7b208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.489978   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:23:23.490003   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:23:23.490018   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:23:23.490034   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:23:23.490051   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:23:23.490069   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:23:23.490087   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:23:23.490100   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:23:23.490185   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:23:23.490228   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:23:23.490238   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:23:23.490273   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:23:23.490304   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:23:23.490333   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:23:23.490390   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:23.490435   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.490455   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.490497   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.491033   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:23:23.517815   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:23:23.544857   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:23:23.571386   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:23:23.600966   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:23:23.629855   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:23:23.657907   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:23:23.685564   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:23:23.713503   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:23:23.745344   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:23:23.774311   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:23:23.807603   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:23:23.832523   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:23:23.839649   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:23:23.851364   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.856325   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.856396   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.864469   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:23:23.876649   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:23:23.888129   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.892889   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.892949   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.901167   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:23:23.912487   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:23:23.924831   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.929289   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.929357   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.937110   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:23:23.948517   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:23:23.952948   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:23:23.953011   69358 kubeadm.go:392] StartCluster: {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:23.953080   69358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 22:23:23.953122   69358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:23:23.991138   69358 cri.go:89] found id: ""
	I0919 22:23:23.991247   69358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:23:24.003111   69358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:23:24.013643   69358 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:23:24.013714   69358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:23:24.024557   69358 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:23:24.024576   69358 kubeadm.go:157] found existing configuration files:
	
	I0919 22:23:24.024633   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:23:24.035252   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:23:24.035322   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:23:24.045590   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:23:24.056529   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:23:24.056590   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:23:24.066716   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:23:24.077570   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:23:24.077653   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:23:24.088177   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:23:24.098372   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:23:24.098426   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:23:24.108265   69358 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:23:24.149643   69358 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:23:24.149730   69358 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:23:24.166048   69358 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:23:24.166117   69358 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:23:24.166172   69358 kubeadm.go:310] OS: Linux
	I0919 22:23:24.166213   69358 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:23:24.166275   69358 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:23:24.166357   69358 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:23:24.166446   69358 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:23:24.166536   69358 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:23:24.166608   69358 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:23:24.166683   69358 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:23:24.166760   69358 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:23:24.230351   69358 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:23:24.230487   69358 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:23:24.230602   69358 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:23:24.238806   69358 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:23:24.243498   69358 out.go:252]   - Generating certificates and keys ...
	I0919 22:23:24.243610   69358 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:23:24.243715   69358 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:23:24.335199   69358 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:23:24.361175   69358 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:23:24.769077   69358 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:23:25.053293   69358 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:23:25.392067   69358 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:23:25.392251   69358 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-326307 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:23:25.629558   69358 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:23:25.629706   69358 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-326307 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:23:26.141828   69358 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:23:26.343650   69358 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:23:26.737207   69358 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:23:26.737292   69358 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:23:27.020543   69358 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:23:27.208963   69358 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:23:27.382044   69358 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:23:27.660395   69358 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:23:27.867964   69358 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:23:27.868475   69358 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:23:27.870857   69358 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:23:27.873408   69358 out.go:252]   - Booting up control plane ...
	I0919 22:23:27.873545   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:23:27.873665   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:23:27.873811   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:23:27.884709   69358 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:23:27.884874   69358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:23:27.892815   69358 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:23:27.893043   69358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:23:27.893108   69358 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:23:27.981591   69358 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:23:27.981772   69358 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:23:29.484085   69358 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501867716s
	I0919 22:23:29.488057   69358 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:23:29.488269   69358 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:23:29.488401   69358 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:23:29.488636   69358 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:23:31.058022   69358 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.569932465s
	I0919 22:23:31.762139   69358 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.27419796s
	I0919 22:23:33.991284   69358 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.503282233s
	I0919 22:23:34.005767   69358 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:23:34.017935   69358 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:23:34.032336   69358 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:23:34.032534   69358 kubeadm.go:310] [mark-control-plane] Marking the node ha-326307 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:23:34.042496   69358 kubeadm.go:310] [bootstrap-token] Using token: ym5hq4.pw1tvtip1io4ljbf
	I0919 22:23:34.044381   69358 out.go:252]   - Configuring RBAC rules ...
	I0919 22:23:34.044558   69358 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:23:34.048649   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:23:34.057509   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:23:34.061297   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:23:34.064926   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:23:34.069534   69358 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:23:34.399239   69358 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:23:34.818126   69358 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:23:35.398001   69358 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:23:35.398907   69358 kubeadm.go:310] 
	I0919 22:23:35.399007   69358 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:23:35.399035   69358 kubeadm.go:310] 
	I0919 22:23:35.399120   69358 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:23:35.399149   69358 kubeadm.go:310] 
	I0919 22:23:35.399207   69358 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:23:35.399301   69358 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:23:35.399350   69358 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:23:35.399356   69358 kubeadm.go:310] 
	I0919 22:23:35.399402   69358 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:23:35.399408   69358 kubeadm.go:310] 
	I0919 22:23:35.399470   69358 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:23:35.399481   69358 kubeadm.go:310] 
	I0919 22:23:35.399554   69358 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:23:35.399644   69358 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:23:35.399706   69358 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:23:35.399712   69358 kubeadm.go:310] 
	I0919 22:23:35.399803   69358 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:23:35.399888   69358 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:23:35.399892   69358 kubeadm.go:310] 
	I0919 22:23:35.399971   69358 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ym5hq4.pw1tvtip1io4ljbf \
	I0919 22:23:35.400068   69358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 \
	I0919 22:23:35.400089   69358 kubeadm.go:310] 	--control-plane 
	I0919 22:23:35.400093   69358 kubeadm.go:310] 
	I0919 22:23:35.400204   69358 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:23:35.400217   69358 kubeadm.go:310] 
	I0919 22:23:35.400285   69358 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ym5hq4.pw1tvtip1io4ljbf \
	I0919 22:23:35.400382   69358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 
	I0919 22:23:35.403119   69358 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:23:35.403274   69358 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
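Note: the join commands printed by kubeadm above embed the bootstrap token ym5hq4.pw1tvtip1io4ljbf, which expires after its default TTL (24h). If another node were added later by hand, the token and a ready-to-use join command could be regenerated on the primary control plane. A minimal sketch, assuming kubeadm is available inside the node:

  # list existing bootstrap tokens and their expiry
  sudo kubeadm token list
  # print a fresh worker join command with a new token
  sudo kubeadm token create --print-join-command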
	I0919 22:23:35.403305   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:35.403317   69358 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:23:35.407302   69358 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:23:35.409983   69358 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:23:35.415011   69358 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:23:35.415039   69358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:23:35.436210   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
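Note: with more than one node planned, minikube applied a kindnet CNI manifest through its bundled kubectl (the `kubectl apply -f /var/tmp/minikube/cni.yaml` run above). A hedged way to confirm the CNI rollout afterwards; the `kindnet` DaemonSet name is an assumption based on the default manifest:

  # CNI config files written on the node
  ls /etc/cni/net.d/
  # kindnet pods should become Ready on every node (DaemonSet name assumed)
  kubectl -n kube-system get daemonset kindnet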
	I0919 22:23:35.679694   69358 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:23:35.679756   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:35.679779   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307 minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=true
	I0919 22:23:35.787076   69358 ops.go:34] apiserver oom_adj: -16
	I0919 22:23:35.787237   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:36.287327   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:36.787300   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:37.287415   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:37.788066   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:38.287401   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:38.787731   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.288028   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.788301   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.864456   69358 kubeadm.go:1105] duration metric: took 4.184765822s to wait for elevateKubeSystemPrivileges
	I0919 22:23:39.864500   69358 kubeadm.go:394] duration metric: took 15.911493151s to StartCluster
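Note: the elevateKubeSystemPrivileges step timed above creates the minikube-rbac ClusterRoleBinding (cluster-admin for kube-system:default) and then polls `get sa default` until the namespace's default ServiceAccount exists. The same objects can be checked by hand; this is only an illustrative sketch:

  # binding created by the step above
  kubectl get clusterrolebinding minikube-rbac -o yaml
  # the poll loop waits for this ServiceAccount to appear
  kubectl -n kube-system get serviceaccount default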
	I0919 22:23:39.864524   69358 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:39.864601   69358 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:23:39.865911   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:39.866255   69358 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:39.866275   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:23:39.866288   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:23:39.866297   69358 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:23:39.866377   69358 addons.go:69] Setting storage-provisioner=true in profile "ha-326307"
	I0919 22:23:39.866398   69358 addons.go:238] Setting addon storage-provisioner=true in "ha-326307"
	I0919 22:23:39.866400   69358 addons.go:69] Setting default-storageclass=true in profile "ha-326307"
	I0919 22:23:39.866428   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:39.866523   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:39.866434   69358 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-326307"
	I0919 22:23:39.866921   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.867012   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.892851   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:23:39.893863   69358 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:23:39.893944   69358 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:23:39.893953   69358 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:23:39.894002   69358 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:23:39.894061   69358 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:23:39.893888   69358 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:23:39.894642   69358 addons.go:238] Setting addon default-storageclass=true in "ha-326307"
	I0919 22:23:39.894691   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:39.895196   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.895724   69358 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:23:39.897293   69358 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:23:39.897315   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:23:39.897386   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:39.923915   69358 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:23:39.923939   69358 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:23:39.924001   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:39.926323   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:39.953300   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:39.968501   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:23:40.065441   69358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:23:40.083647   69358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:23:40.190461   69358 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
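Note: the sed/`kubectl replace` pipeline above splices a `hosts` block mapping host.minikube.internal to 192.168.49.1 into the CoreDNS Corefile. A hedged way to confirm the record landed:

  # the Corefile should now contain a hosts block with 192.168.49.1
  kubectl -n kube-system get configmap coredns -o yaml | grep -A 4 'hosts {'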
	I0919 22:23:40.433561   69358 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:23:40.435567   69358 addons.go:514] duration metric: took 569.25898ms for enable addons: enabled=[storage-provisioner default-storageclass]
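Note: both default addons were applied as plain manifests from /etc/kubernetes/addons. A quick sanity check; the `storage-provisioner` pod name is an assumption from the stock manifest:

  # default-storageclass should mark a class as (default)
  kubectl get storageclass
  # storage-provisioner runs as a single pod in kube-system (name assumed)
  kubectl -n kube-system get pod storage-provisioner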
	I0919 22:23:40.435633   69358 start.go:246] waiting for cluster config update ...
	I0919 22:23:40.435651   69358 start.go:255] writing updated cluster config ...
	I0919 22:23:40.437510   69358 out.go:203] 
	I0919 22:23:40.439070   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:40.439141   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:40.441238   69358 out.go:179] * Starting "ha-326307-m02" control-plane node in "ha-326307" cluster
	I0919 22:23:40.443382   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:23:40.445749   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:23:40.447079   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:40.447132   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:23:40.447229   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:23:40.447308   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:23:40.447326   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:23:40.447427   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:40.470325   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:23:40.470347   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:23:40.470366   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:23:40.470391   69358 start.go:360] acquireMachinesLock for ha-326307-m02: {Name:mk4919a9b19250804b0f53d01bcd11efaf9a431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:23:40.470518   69358 start.go:364] duration metric: took 88.309µs to acquireMachinesLock for "ha-326307-m02"
	I0919 22:23:40.470552   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:40.470618   69358 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:23:40.473495   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:23:40.473607   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:23:40.473631   69358 client.go:168] LocalClient.Create starting
	I0919 22:23:40.473689   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:23:40.473724   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:40.473734   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:40.473828   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:23:40.473853   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:40.473861   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:40.474095   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:40.493916   69358 network_create.go:77] Found existing network {name:ha-326307 subnet:0xc000ad7620 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:23:40.493972   69358 kic.go:121] calculated static IP "192.168.49.3" for the "ha-326307-m02" container
	I0919 22:23:40.494055   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:23:40.516112   69358 cli_runner.go:164] Run: docker volume create ha-326307-m02 --label name.minikube.sigs.k8s.io=ha-326307-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:23:40.537046   69358 oci.go:103] Successfully created a docker volume ha-326307-m02
	I0919 22:23:40.537137   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m02 --entrypoint /usr/bin/test -v ha-326307-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:23:40.991997   69358 oci.go:107] Successfully prepared a docker volume ha-326307-m02
	I0919 22:23:40.992038   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:40.992061   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:23:40.992121   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:23:45.362629   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.370467998s)
	I0919 22:23:45.362666   69358 kic.go:203] duration metric: took 4.370603938s to extract preloaded images to volume ...
	W0919 22:23:45.362777   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:23:45.362811   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:23:45.362846   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:23:45.417833   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307-m02 --name ha-326307-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307-m02 --network ha-326307 --ip 192.168.49.3 --volume ha-326307-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:23:45.744363   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Running}}
	I0919 22:23:45.768456   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:45.789293   69358 cli_runner.go:164] Run: docker exec ha-326307-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:23:45.846760   69358 oci.go:144] the created container "ha-326307-m02" has a running status.
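Note: the second control-plane node is another kic container on the same ha-326307 Docker network, pinned to the static IP 192.168.49.3 calculated above. Its state and address can be inspected directly; a minimal sketch:

  # container state, as polled in the log above
  docker container inspect ha-326307-m02 --format '{{.State.Status}}'
  # containers attached to the cluster network and their addresses
  docker network inspect ha-326307 --format '{{json .Containers}}'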
	I0919 22:23:45.846794   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa...
	I0919 22:23:46.005236   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:23:46.005288   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:23:46.042640   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:46.067424   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:23:46.067455   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:23:46.132729   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:46.155854   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:23:46.155967   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.177181   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.177511   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.177533   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:23:46.320054   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:23:46.320089   69358 ubuntu.go:182] provisioning hostname "ha-326307-m02"
	I0919 22:23:46.320185   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.341740   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.341951   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.341965   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m02 && echo "ha-326307-m02" | sudo tee /etc/hostname
	I0919 22:23:46.497123   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:23:46.497234   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.520214   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.520436   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.520455   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:23:46.659417   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:23:46.659458   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:23:46.659492   69358 ubuntu.go:190] setting up certificates
	I0919 22:23:46.659505   69358 provision.go:84] configureAuth start
	I0919 22:23:46.659556   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:46.679498   69358 provision.go:143] copyHostCerts
	I0919 22:23:46.679551   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:46.679598   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:23:46.679605   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:46.679712   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:23:46.679851   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:46.679882   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:23:46.679893   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:46.679947   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:23:46.680043   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:46.680141   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:23:46.680185   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:46.680251   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:23:46.680367   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m02 san=[127.0.0.1 192.168.49.3 ha-326307-m02 localhost minikube]
	I0919 22:23:46.869190   69358 provision.go:177] copyRemoteCerts
	I0919 22:23:46.869251   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:23:46.869285   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.888798   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:46.988385   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:23:46.988452   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:23:47.018227   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:23:47.018299   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:23:47.046810   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:23:47.046866   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:23:47.074372   69358 provision.go:87] duration metric: took 414.855982ms to configureAuth
	I0919 22:23:47.074400   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:23:47.074581   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:47.074598   69358 machine.go:96] duration metric: took 918.712366ms to provisionDockerMachine
	I0919 22:23:47.074607   69358 client.go:171] duration metric: took 6.600969352s to LocalClient.Create
	I0919 22:23:47.074631   69358 start.go:167] duration metric: took 6.601023702s to libmachine.API.Create "ha-326307"
	I0919 22:23:47.074642   69358 start.go:293] postStartSetup for "ha-326307-m02" (driver="docker")
	I0919 22:23:47.074650   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:23:47.074721   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:23:47.074767   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.094538   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.195213   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:23:47.199088   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:23:47.199139   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:23:47.199181   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:23:47.199191   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:23:47.199215   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:23:47.199276   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:23:47.199378   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:23:47.199394   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:23:47.199502   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:23:47.209642   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:47.240945   69358 start.go:296] duration metric: took 166.288086ms for postStartSetup
	I0919 22:23:47.241383   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:47.261061   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:47.261460   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:23:47.261513   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.280359   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.374609   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:23:47.379255   69358 start.go:128] duration metric: took 6.908623332s to createHost
	I0919 22:23:47.379283   69358 start.go:83] releasing machines lock for "ha-326307-m02", held for 6.908753842s
	I0919 22:23:47.379346   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:47.400418   69358 out.go:179] * Found network options:
	I0919 22:23:47.401854   69358 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:23:47.403072   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:23:47.403133   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:23:47.403263   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:23:47.403266   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:23:47.403326   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.403332   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.423928   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.424218   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.597529   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:23:47.630263   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:23:47.630334   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:23:47.661706   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:23:47.661733   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:23:47.661772   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:23:47.661826   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:23:47.675485   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:23:47.687726   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:23:47.687780   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:23:47.701818   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:23:47.717912   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:23:47.789825   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:23:47.863188   69358 docker.go:234] disabling docker service ...
	I0919 22:23:47.863267   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:23:47.881757   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:23:47.893830   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:23:47.963004   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:23:48.034120   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:23:48.046843   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:23:48.065279   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:23:48.078269   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:23:48.089105   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:23:48.089186   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:23:48.099867   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:48.111076   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:23:48.122049   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:48.132648   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:23:48.142263   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:23:48.152876   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:23:48.163459   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:23:48.174096   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:23:48.183483   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:23:48.192780   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:48.261004   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:23:48.364434   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:23:48.364508   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:23:48.368726   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:23:48.368792   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:23:48.372683   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:23:48.409110   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:23:48.409200   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:48.433389   69358 ssh_runner.go:195] Run: containerd --version
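Note: because the host uses the systemd cgroup driver, the sed edits above rewrote /etc/containerd/config.toml (SystemdCgroup = true, pause image, CNI conf dir) before containerd was restarted and its version re-checked. A hedged way to verify the rewrite by hand on the node:

  # cgroup driver flag set by the sed edits above
  grep SystemdCgroup /etc/containerd/config.toml
  # containerd should be back up after the restart
  sudo systemctl is-active containerd
  sudo crictl version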
	I0919 22:23:48.460529   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:23:48.462207   69358 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:23:48.464087   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:48.482217   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:23:48.486620   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:48.498806   69358 mustload.go:65] Loading cluster: ha-326307
	I0919 22:23:48.499032   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:48.499315   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:48.518576   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:48.518850   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.3
	I0919 22:23:48.518866   69358 certs.go:194] generating shared ca certs ...
	I0919 22:23:48.518885   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.519012   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:23:48.519082   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:23:48.519096   69358 certs.go:256] generating profile certs ...
	I0919 22:23:48.519222   69358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:23:48.519259   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4
	I0919 22:23:48.519288   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:23:48.963393   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 ...
	I0919 22:23:48.963428   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4: {Name:mk381f64cc0991e3a6417e9586b9565eb7a8dbf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.963635   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4 ...
	I0919 22:23:48.963660   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4: {Name:mk4dbead0b9c36c7a3635520729a1eb2d4b33f13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.963762   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:23:48.963935   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
	I0919 22:23:48.964103   69358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:23:48.964120   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:23:48.964138   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:23:48.964166   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:23:48.964183   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:23:48.964200   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:23:48.964218   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:23:48.964234   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:23:48.964251   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:23:48.964313   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:23:48.964355   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:23:48.964366   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:23:48.964406   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:23:48.964438   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:23:48.964471   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:23:48.964528   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:48.964570   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:23:48.964592   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:23:48.964612   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:48.964731   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:48.983907   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:49.073692   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:23:49.078819   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:23:49.094234   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:23:49.099593   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:23:49.113663   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:23:49.117744   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:23:49.133048   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:23:49.136861   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:23:49.150734   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:23:49.154901   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:23:49.169388   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:23:49.173566   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:23:49.188070   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:23:49.215594   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:23:49.243561   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:23:49.271624   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:23:49.301814   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:23:49.332556   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:23:49.360723   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:23:49.388872   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:23:49.417316   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:23:49.448722   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:23:49.476877   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:23:49.504914   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:23:49.524969   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:23:49.544942   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:23:49.564506   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:23:49.584887   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:23:49.605725   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:23:49.625552   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:23:49.645811   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:23:49.652062   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:23:49.664544   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.668823   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.668889   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.676892   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:23:49.688737   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:23:49.699741   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.703762   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.703823   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.711311   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:23:49.721987   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:23:49.732874   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.737289   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.737351   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.745312   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:23:49.756384   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:23:49.760242   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:23:49.760315   69358 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0919 22:23:49.760415   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:23:49.760438   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:23:49.760476   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:23:49.773427   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:23:49.773499   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:23:49.773549   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:23:49.784237   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:23:49.784306   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:23:49.794534   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:23:49.814529   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:23:49.837846   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:23:49.859421   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:23:49.863859   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:49.876721   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:49.948089   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:23:49.971010   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:49.971327   69358 start.go:317] joinCluster: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:49.971508   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:23:49.971618   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:49.992535   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:50.137695   69358 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:50.137740   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kb90tj.om7zof6htice1y8z --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:24:08.633363   69358 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kb90tj.om7zof6htice1y8z --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.495537277s)
	I0919 22:24:08.633404   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:24:08.849981   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307-m02 minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=false
	I0919 22:24:08.928109   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326307-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:24:09.011507   69358 start.go:319] duration metric: took 19.040175049s to joinCluster
	I0919 22:24:09.011590   69358 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:09.011816   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:09.013756   69358 out.go:179] * Verifying Kubernetes components...
	I0919 22:24:09.015232   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:09.115618   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:09.130578   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:24:09.130645   69358 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:24:09.130869   69358 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m02" to be "Ready" ...
	W0919 22:24:11.134373   69358 node_ready.go:57] node "ha-326307-m02" has "Ready":"False" status (will retry)
	I0919 22:24:11.634655   69358 node_ready.go:49] node "ha-326307-m02" is "Ready"
	I0919 22:24:11.634683   69358 node_ready.go:38] duration metric: took 2.503796185s for node "ha-326307-m02" to be "Ready" ...
	I0919 22:24:11.634697   69358 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:24:11.634751   69358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:24:11.647782   69358 api_server.go:72] duration metric: took 2.636155477s to wait for apiserver process to appear ...
	I0919 22:24:11.647812   69358 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:24:11.647848   69358 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:24:11.652005   69358 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:24:11.652952   69358 api_server.go:141] control plane version: v1.34.0
	I0919 22:24:11.652975   69358 api_server.go:131] duration metric: took 5.15649ms to wait for apiserver health ...
	I0919 22:24:11.652984   69358 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:24:11.657535   69358 system_pods.go:59] 17 kube-system pods found
	I0919 22:24:11.657569   69358 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:11.657577   69358 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:11.657581   69358 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:11.657586   69358 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Pending
	I0919 22:24:11.657591   69358 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:11.657598   69358 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Pending: PodScheduled:SchedulerError (pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed)
	I0919 22:24:11.657604   69358 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:11.657609   69358 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Pending
	I0919 22:24:11.657616   69358 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:11.657621   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Pending
	I0919 22:24:11.657626   69358 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:11.657636   69358 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q8mtj": pod kube-proxy-q8mtj is already assigned to node "ha-326307-m02")
	I0919 22:24:11.657642   69358 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:11.657649   69358 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Pending
	I0919 22:24:11.657654   69358 system_pods.go:61] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:11.657660   69358 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Pending
	I0919 22:24:11.657665   69358 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:11.657673   69358 system_pods.go:74] duration metric: took 4.68298ms to wait for pod list to return data ...
	I0919 22:24:11.657687   69358 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:24:11.660430   69358 default_sa.go:45] found service account: "default"
	I0919 22:24:11.660456   69358 default_sa.go:55] duration metric: took 2.762581ms for default service account to be created ...
	I0919 22:24:11.660467   69358 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:24:11.664515   69358 system_pods.go:86] 17 kube-system pods found
	I0919 22:24:11.664549   69358 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:11.664557   69358 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:11.664563   69358 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:11.664567   69358 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Pending
	I0919 22:24:11.664574   69358 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:11.664583   69358 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Pending: PodScheduled:SchedulerError (pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed)
	I0919 22:24:11.664590   69358 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:11.664594   69358 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Pending
	I0919 22:24:11.664600   69358 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:11.664606   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Pending
	I0919 22:24:11.664615   69358 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:11.664623   69358 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q8mtj": pod kube-proxy-q8mtj is already assigned to node "ha-326307-m02")
	I0919 22:24:11.664629   69358 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:11.664637   69358 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Pending
	I0919 22:24:11.664643   69358 system_pods.go:89] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:11.664649   69358 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Pending
	I0919 22:24:11.664653   69358 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:11.664663   69358 system_pods.go:126] duration metric: took 4.189005ms to wait for k8s-apps to be running ...
	I0919 22:24:11.664676   69358 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:24:11.664734   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:24:11.677679   69358 system_svc.go:56] duration metric: took 12.991783ms WaitForService to wait for kubelet
	I0919 22:24:11.677718   69358 kubeadm.go:578] duration metric: took 2.666095008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:11.677741   69358 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:24:11.681219   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:11.681249   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:11.681276   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:11.681282   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:11.681288   69358 node_conditions.go:105] duration metric: took 3.540774ms to run NodePressure ...
	I0919 22:24:11.681302   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:24:11.681336   69358 start.go:255] writing updated cluster config ...
	I0919 22:24:11.683465   69358 out.go:203] 
	I0919 22:24:11.685336   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:11.685480   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:11.687190   69358 out.go:179] * Starting "ha-326307-m03" control-plane node in "ha-326307" cluster
	I0919 22:24:11.688774   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:24:11.690230   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:11.691529   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:24:11.691564   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:11.691570   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:11.691776   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:11.691792   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:24:11.691940   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:11.714494   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:11.714516   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:11.714538   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:11.714564   69358 start.go:360] acquireMachinesLock for ha-326307-m03: {Name:mk07818636650a6efffb19d787e84a34d6f1dd98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:11.714717   69358 start.go:364] duration metric: took 129.412µs to acquireMachinesLock for "ha-326307-m03"
	I0919 22:24:11.714749   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:fal
se kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:11.714883   69358 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:24:11.717146   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:11.717288   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:24:11.717325   69358 client.go:168] LocalClient.Create starting
	I0919 22:24:11.717396   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:24:11.717429   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:11.717444   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:11.717499   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:24:11.717523   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:11.717531   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:11.717757   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:11.736709   69358 network_create.go:77] Found existing network {name:ha-326307 subnet:0xc001c6a9f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:11.736749   69358 kic.go:121] calculated static IP "192.168.49.4" for the "ha-326307-m03" container
	I0919 22:24:11.736838   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:11.757855   69358 cli_runner.go:164] Run: docker volume create ha-326307-m03 --label name.minikube.sigs.k8s.io=ha-326307-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:11.780198   69358 oci.go:103] Successfully created a docker volume ha-326307-m03
	I0919 22:24:11.780287   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m03 --entrypoint /usr/bin/test -v ha-326307-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:12.269719   69358 oci.go:107] Successfully prepared a docker volume ha-326307-m03
	I0919 22:24:12.269772   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:24:12.269795   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:12.269864   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:16.658999   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.389088771s)
	I0919 22:24:16.659030   69358 kic.go:203] duration metric: took 4.389232064s to extract preloaded images to volume ...
	W0919 22:24:16.659114   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:16.659151   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:16.659211   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:16.714324   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307-m03 --name ha-326307-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307-m03 --network ha-326307 --ip 192.168.49.4 --volume ha-326307-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:17.029039   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Running}}
	I0919 22:24:17.050534   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.070017   69358 cli_runner.go:164] Run: docker exec ha-326307-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:17.125252   69358 oci.go:144] the created container "ha-326307-m03" has a running status.
	I0919 22:24:17.125293   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa...
	I0919 22:24:17.618351   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:17.618395   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:17.646956   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.667176   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:17.667203   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:17.713667   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.734276   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:17.734370   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:17.755726   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:17.755941   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:17.755953   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:17.894482   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:24:17.894512   69358 ubuntu.go:182] provisioning hostname "ha-326307-m03"
	I0919 22:24:17.894572   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:17.914204   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:17.914507   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:17.914530   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m03 && echo "ha-326307-m03" | sudo tee /etc/hostname
	I0919 22:24:18.068724   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:24:18.068805   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.088244   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:18.088504   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:18.088525   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:18.227353   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:18.227390   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:24:18.227421   69358 ubuntu.go:190] setting up certificates
	I0919 22:24:18.227433   69358 provision.go:84] configureAuth start
	I0919 22:24:18.227496   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.247948   69358 provision.go:143] copyHostCerts
	I0919 22:24:18.247989   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:24:18.248023   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:24:18.248029   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:24:18.248096   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:24:18.248231   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:24:18.248289   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:24:18.248299   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:24:18.248338   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:24:18.248404   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:24:18.248423   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:24:18.248427   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:24:18.248457   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:24:18.248512   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m03 san=[127.0.0.1 192.168.49.4 ha-326307-m03 localhost minikube]
	I0919 22:24:18.393257   69358 provision.go:177] copyRemoteCerts
	I0919 22:24:18.393319   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:18.393353   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.412748   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.514005   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:18.514092   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:24:18.542657   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:18.542733   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:18.569691   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:18.569759   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:18.596329   69358 provision.go:87] duration metric: took 368.876183ms to configureAuth
	I0919 22:24:18.596357   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:18.596551   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:18.596562   69358 machine.go:96] duration metric: took 862.263986ms to provisionDockerMachine
	I0919 22:24:18.596567   69358 client.go:171] duration metric: took 6.879237415s to LocalClient.Create
	I0919 22:24:18.596586   69358 start.go:167] duration metric: took 6.879300568s to libmachine.API.Create "ha-326307"
	I0919 22:24:18.596594   69358 start.go:293] postStartSetup for "ha-326307-m03" (driver="docker")
	I0919 22:24:18.596602   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:18.596644   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:18.596677   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.615349   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.717907   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:18.722093   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:18.722137   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:18.722150   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:18.722173   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:18.722186   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:24:18.722248   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:24:18.722356   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:24:18.722372   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:24:18.722580   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:18.732899   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:24:18.766453   69358 start.go:296] duration metric: took 169.843532ms for postStartSetup
	I0919 22:24:18.766899   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.786322   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:18.786775   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:18.786833   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.806377   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.901798   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:18.907121   69358 start.go:128] duration metric: took 7.192223106s to createHost
	I0919 22:24:18.907180   69358 start.go:83] releasing machines lock for "ha-326307-m03", held for 7.192445142s
	I0919 22:24:18.907266   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.929545   69358 out.go:179] * Found network options:
	I0919 22:24:18.931020   69358 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:24:18.932299   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932334   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932375   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932396   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:18.932501   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:18.932558   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.932588   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:18.932662   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.952990   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.953400   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:19.131622   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:19.165991   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:19.166079   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:19.197850   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:19.197878   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:24:19.197909   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:19.197960   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:24:19.211538   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:19.223959   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:24:19.224009   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:24:19.239088   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:24:19.254102   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:24:19.328965   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:24:19.406808   69358 docker.go:234] disabling docker service ...
	I0919 22:24:19.406888   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:24:19.425948   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:24:19.438801   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:24:19.510941   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:24:19.581470   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:19.594683   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:19.613666   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:19.627192   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:19.638603   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:19.638668   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:19.649965   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:19.661530   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:19.673111   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:19.684782   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:19.696056   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:19.707630   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:19.719687   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:19.731477   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:19.741738   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:19.751963   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:19.822277   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:19.931918   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:24:19.931995   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:24:19.936531   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:24:19.936591   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:24:19.940632   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:19.977944   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:24:19.978013   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:24:20.003290   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:24:20.032714   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:24:20.034190   69358 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:20.035560   69358 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:24:20.036915   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:20.055444   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:20.059762   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:20.072851   69358 mustload.go:65] Loading cluster: ha-326307
	I0919 22:24:20.073081   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:20.073298   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:24:20.091365   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:24:20.091605   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.4
	I0919 22:24:20.091616   69358 certs.go:194] generating shared ca certs ...
	I0919 22:24:20.091629   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.091746   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:24:20.091786   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:24:20.091796   69358 certs.go:256] generating profile certs ...
	I0919 22:24:20.091865   69358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:24:20.091891   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604
	I0919 22:24:20.091905   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:24:20.372898   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 ...
	I0919 22:24:20.372943   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604: {Name:mk9b724916886d4c69140cc45e23ce082460d116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.373186   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604 ...
	I0919 22:24:20.373210   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604: {Name:mkfc0cd42f96faa2f697a81fc7ca671182c3cea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.373311   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:24:20.373471   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
	I0919 22:24:20.373649   69358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:24:20.373668   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:20.373682   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:20.373692   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:20.373703   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:20.373713   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:20.373723   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:20.373733   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:20.373743   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:20.373795   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:24:20.373823   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:20.373832   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:24:20.373856   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:24:20.373878   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:20.373899   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:20.373936   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:24:20.373962   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:24:20.373976   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:20.373987   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:24:20.374034   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:24:20.394051   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:24:20.484593   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:20.489010   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:20.503471   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:20.507649   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:24:20.522195   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:20.526410   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:20.541840   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:20.546043   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:24:20.560364   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:20.564230   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:20.577547   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:20.581387   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:20.594800   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:20.622991   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:20.651461   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:20.678113   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:20.705292   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:24:20.732489   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:20.762310   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:20.789808   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:20.819251   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:24:20.851010   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:20.879714   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:24:20.908177   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:20.928644   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:24:20.949340   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:20.969391   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:24:20.989837   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:21.011118   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:21.031485   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:24:21.052354   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:24:21.058486   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:24:21.069582   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.074372   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.074440   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.082186   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:21.092957   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:24:21.104085   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.108193   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.108258   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.116078   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:21.127607   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:21.139338   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.143794   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.143848   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.151321   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:21.162759   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:21.166499   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:21.166555   69358 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0919 22:24:21.166642   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:21.166677   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:21.166738   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:21.180123   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:21.180202   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:24:21.180261   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:21.189900   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:21.189963   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:21.200336   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:24:21.220715   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:21.244525   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:21.268789   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:21.272885   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:21.285764   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:21.362911   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:21.394403   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:24:21.394691   69358 start.go:317] joinCluster: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.394850   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:21.394898   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:24:21.419020   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:24:21.569927   69358 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:21.569980   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uhifqr.okdtfjqzhuoxbb2e --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:24:32.089764   69358 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uhifqr.okdtfjqzhuoxbb2e --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.519762438s)
	I0919 22:24:32.089793   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:24:32.309566   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307-m03 minikube.k8s.io/updated_at=2025_09_19T22_24_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=false
	I0919 22:24:32.391142   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326307-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:24:32.471336   69358 start.go:319] duration metric: took 11.076641052s to joinCluster
	I0919 22:24:32.471402   69358 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:32.471770   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:32.473461   69358 out.go:179] * Verifying Kubernetes components...
	I0919 22:24:32.475427   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:32.579664   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:32.593786   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:24:32.593856   69358 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:24:32.594084   69358 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m03" to be "Ready" ...
	W0919 22:24:34.597297   69358 node_ready.go:57] node "ha-326307-m03" has "Ready":"False" status (will retry)
	I0919 22:24:35.098269   69358 node_ready.go:49] node "ha-326307-m03" is "Ready"
	I0919 22:24:35.098296   69358 node_ready.go:38] duration metric: took 2.504196997s for node "ha-326307-m03" to be "Ready" ...
	I0919 22:24:35.098310   69358 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:24:35.098358   69358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:24:35.111440   69358 api_server.go:72] duration metric: took 2.640014462s to wait for apiserver process to appear ...
	I0919 22:24:35.111465   69358 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:24:35.111483   69358 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:24:35.115724   69358 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:24:35.116810   69358 api_server.go:141] control plane version: v1.34.0
	I0919 22:24:35.116837   69358 api_server.go:131] duration metric: took 5.364462ms to wait for apiserver health ...
	I0919 22:24:35.116849   69358 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:24:35.123343   69358 system_pods.go:59] 27 kube-system pods found
	I0919 22:24:35.123372   69358 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:35.123377   69358 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:35.123380   69358 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:35.123384   69358 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:24:35.123387   69358 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Pending
	I0919 22:24:35.123390   69358 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:35.123393   69358 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:24:35.123400   69358 system_pods.go:61] "kindnet-pnj9r" [14a458fc-0e9d-42e9-9473-f7f2a6f7f571] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-pnj9r": pod kindnet-pnj9r is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123408   69358 system_pods.go:61] "kindnet-qxwpq" [173e48ec-ef56-4824-9f55-a04b199b7943] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qxwpq": pod kindnet-qxwpq is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123416   69358 system_pods.go:61] "kindnet-wcct9" [5472dcae-344b-43fb-84d1-8d0d41852cd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wcct9": pod kindnet-wcct9 is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123427   69358 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:35.123433   69358 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:24:35.123445   69358 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Pending
	I0919 22:24:35.123450   69358 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:35.123454   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running
	I0919 22:24:35.123457   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Pending
	I0919 22:24:35.123461   69358 system_pods.go:61] "kube-proxy-6nmjx" [81414747-6c4e-495e-a28d-cb17f0c0c306] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-6nmjx": pod kube-proxy-6nmjx is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123465   69358 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:35.123469   69358 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:24:35.123472   69358 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ws89d": pod kube-proxy-ws89d is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123477   69358 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:35.123481   69358 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running
	I0919 22:24:35.123487   69358 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Pending
	I0919 22:24:35.123489   69358 system_pods.go:61] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:35.123492   69358 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:24:35.123496   69358 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Pending
	I0919 22:24:35.123503   69358 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:35.123511   69358 system_pods.go:74] duration metric: took 6.65469ms to wait for pod list to return data ...
	I0919 22:24:35.123525   69358 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:24:35.126592   69358 default_sa.go:45] found service account: "default"
	I0919 22:24:35.126616   69358 default_sa.go:55] duration metric: took 3.083846ms for default service account to be created ...
	I0919 22:24:35.126627   69358 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:24:35.131895   69358 system_pods.go:86] 27 kube-system pods found
	I0919 22:24:35.131928   69358 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:35.131936   69358 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:35.131941   69358 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:35.131946   69358 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:24:35.131950   69358 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Pending
	I0919 22:24:35.131954   69358 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:35.131959   69358 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:24:35.131968   69358 system_pods.go:89] "kindnet-pnj9r" [14a458fc-0e9d-42e9-9473-f7f2a6f7f571] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-pnj9r": pod kindnet-pnj9r is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131975   69358 system_pods.go:89] "kindnet-qxwpq" [173e48ec-ef56-4824-9f55-a04b199b7943] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qxwpq": pod kindnet-qxwpq is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131986   69358 system_pods.go:89] "kindnet-wcct9" [5472dcae-344b-43fb-84d1-8d0d41852cd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wcct9": pod kindnet-wcct9 is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131993   69358 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:35.132003   69358 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:24:35.132009   69358 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Pending
	I0919 22:24:35.132015   69358 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:35.132022   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running
	I0919 22:24:35.132028   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Pending
	I0919 22:24:35.132035   69358 system_pods.go:89] "kube-proxy-6nmjx" [81414747-6c4e-495e-a28d-cb17f0c0c306] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-6nmjx": pod kube-proxy-6nmjx is already assigned to node "ha-326307-m03")
	I0919 22:24:35.132044   69358 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:35.132050   69358 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:24:35.132057   69358 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ws89d": pod kube-proxy-ws89d is already assigned to node "ha-326307-m03")
	I0919 22:24:35.132067   69358 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:35.132076   69358 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running
	I0919 22:24:35.132082   69358 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Pending
	I0919 22:24:35.132090   69358 system_pods.go:89] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:35.132096   69358 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:24:35.132101   69358 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Pending
	I0919 22:24:35.132107   69358 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:35.132117   69358 system_pods.go:126] duration metric: took 5.483041ms to wait for k8s-apps to be running ...
	I0919 22:24:35.132130   69358 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:24:35.132201   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:24:35.145901   69358 system_svc.go:56] duration metric: took 13.762213ms WaitForService to wait for kubelet
	I0919 22:24:35.145934   69358 kubeadm.go:578] duration metric: took 2.67451015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:35.145953   69358 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:24:35.149091   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149114   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149122   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149126   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149129   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149133   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149137   69358 node_conditions.go:105] duration metric: took 3.180117ms to run NodePressure ...
	I0919 22:24:35.149147   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:24:35.149187   69358 start.go:255] writing updated cluster config ...
	I0919 22:24:35.149520   69358 ssh_runner.go:195] Run: rm -f paused
	I0919 22:24:35.153920   69358 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:24:35.154452   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:24:35.158459   69358 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9j5pw" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.164361   69358 pod_ready.go:94] pod "coredns-66bc5c9577-9j5pw" is "Ready"
	I0919 22:24:35.164388   69358 pod_ready.go:86] duration metric: took 5.90604ms for pod "coredns-66bc5c9577-9j5pw" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.164396   69358 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wqvzd" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.170275   69358 pod_ready.go:94] pod "coredns-66bc5c9577-wqvzd" is "Ready"
	I0919 22:24:35.170305   69358 pod_ready.go:86] duration metric: took 5.903438ms for pod "coredns-66bc5c9577-wqvzd" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.221651   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.227692   69358 pod_ready.go:94] pod "etcd-ha-326307" is "Ready"
	I0919 22:24:35.227721   69358 pod_ready.go:86] duration metric: took 6.035355ms for pod "etcd-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.227738   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.234705   69358 pod_ready.go:94] pod "etcd-ha-326307-m02" is "Ready"
	I0919 22:24:35.234755   69358 pod_ready.go:86] duration metric: took 6.991962ms for pod "etcd-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.234769   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.355285   69358 request.go:683] "Waited before sending request" delay="120.371513ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326307-m03"
	I0919 22:24:35.555444   69358 request.go:683] "Waited before sending request" delay="196.344855ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:35.955374   69358 request.go:683] "Waited before sending request" delay="196.276117ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:35.958866   69358 pod_ready.go:94] pod "etcd-ha-326307-m03" is "Ready"
	I0919 22:24:35.958897   69358 pod_ready.go:86] duration metric: took 724.121102ms for pod "etcd-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.155371   69358 request.go:683] "Waited before sending request" delay="196.353052ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:24:36.158952   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.355354   69358 request.go:683] "Waited before sending request" delay="196.272183ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307"
	I0919 22:24:36.555231   69358 request.go:683] "Waited before sending request" delay="196.389456ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307"
	I0919 22:24:36.558900   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307" is "Ready"
	I0919 22:24:36.558927   69358 pod_ready.go:86] duration metric: took 399.940435ms for pod "kube-apiserver-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.558936   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.755357   69358 request.go:683] "Waited before sending request" delay="196.333509ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307-m02"
	I0919 22:24:36.955622   69358 request.go:683] "Waited before sending request" delay="196.371107ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:36.958850   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307-m02" is "Ready"
	I0919 22:24:36.958881   69358 pod_ready.go:86] duration metric: took 399.937855ms for pod "kube-apiserver-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.958892   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.155391   69358 request.go:683] "Waited before sending request" delay="196.40338ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307-m03"
	I0919 22:24:37.355336   69358 request.go:683] "Waited before sending request" delay="196.255836ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:37.358527   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307-m03" is "Ready"
	I0919 22:24:37.358558   69358 pod_ready.go:86] duration metric: took 399.659411ms for pod "kube-apiserver-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.555013   69358 request.go:683] "Waited before sending request" delay="196.298446ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0919 22:24:37.559362   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.755832   69358 request.go:683] "Waited before sending request" delay="196.350309ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307"
	I0919 22:24:37.954837   69358 request.go:683] "Waited before sending request" delay="195.286624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307"
	I0919 22:24:37.958236   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307" is "Ready"
	I0919 22:24:37.958266   69358 pod_ready.go:86] duration metric: took 398.878465ms for pod "kube-controller-manager-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.958274   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.155758   69358 request.go:683] "Waited before sending request" delay="197.394867ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m02"
	I0919 22:24:38.355929   69358 request.go:683] "Waited before sending request" delay="196.396129ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:38.359268   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307-m02" is "Ready"
	I0919 22:24:38.359292   69358 pod_ready.go:86] duration metric: took 401.013168ms for pod "kube-controller-manager-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.359301   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.555606   69358 request.go:683] "Waited before sending request" delay="196.234039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m03"
	I0919 22:24:38.755574   69358 request.go:683] "Waited before sending request" delay="196.387697ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:38.955366   69358 request.go:683] "Waited before sending request" delay="95.227976ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m03"
	I0919 22:24:39.154881   69358 request.go:683] "Waited before sending request" delay="196.301821ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:39.555649   69358 request.go:683] "Waited before sending request" delay="192.377634ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:39.955251   69358 request.go:683] "Waited before sending request" delay="92.286577ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	W0919 22:24:40.366591   69358 pod_ready.go:104] pod "kube-controller-manager-ha-326307-m03" is not "Ready", error: <nil>
	W0919 22:24:42.367386   69358 pod_ready.go:104] pod "kube-controller-manager-ha-326307-m03" is not "Ready", error: <nil>
	I0919 22:24:43.367824   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307-m03" is "Ready"
	I0919 22:24:43.367860   69358 pod_ready.go:86] duration metric: took 5.00855284s for pod "kube-controller-manager-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.371145   69358 pod_ready.go:83] waiting for pod "kube-proxy-8kxtv" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.376946   69358 pod_ready.go:94] pod "kube-proxy-8kxtv" is "Ready"
	I0919 22:24:43.376975   69358 pod_ready.go:86] duration metric: took 5.786362ms for pod "kube-proxy-8kxtv" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.376985   69358 pod_ready.go:83] waiting for pod "kube-proxy-q8mtj" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.555396   69358 request.go:683] "Waited before sending request" delay="178.323112ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8mtj"
	I0919 22:24:43.755331   69358 request.go:683] "Waited before sending request" delay="196.35612ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:43.758666   69358 pod_ready.go:94] pod "kube-proxy-q8mtj" is "Ready"
	I0919 22:24:43.758695   69358 pod_ready.go:86] duration metric: took 381.70368ms for pod "kube-proxy-q8mtj" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.758704   69358 pod_ready.go:83] waiting for pod "kube-proxy-ws89d" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.955265   69358 request.go:683] "Waited before sending request" delay="196.399278ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ws89d"
	I0919 22:24:44.155007   69358 request.go:683] "Waited before sending request" delay="196.303687ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:44.354881   69358 request.go:683] "Waited before sending request" delay="95.2124ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ws89d"
	I0919 22:24:44.555609   69358 request.go:683] "Waited before sending request" delay="197.246504ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:44.955613   69358 request.go:683] "Waited before sending request" delay="192.471154ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:45.355390   69358 request.go:683] "Waited before sending request" delay="92.281537ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	W0919 22:24:45.765195   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:48.265294   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:50.765471   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:53.265410   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:55.265474   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:57.765267   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:59.765483   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:02.266617   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:04.766256   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:07.265177   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:09.265694   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:11.765032   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:13.765313   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:15.766278   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	I0919 22:25:17.764644   69358 pod_ready.go:94] pod "kube-proxy-ws89d" is "Ready"
	I0919 22:25:17.764670   69358 pod_ready.go:86] duration metric: took 34.005951783s for pod "kube-proxy-ws89d" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.767738   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.772985   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307" is "Ready"
	I0919 22:25:17.773015   69358 pod_ready.go:86] duration metric: took 5.246042ms for pod "kube-scheduler-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.773023   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.778916   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307-m02" is "Ready"
	I0919 22:25:17.778942   69358 pod_ready.go:86] duration metric: took 5.914033ms for pod "kube-scheduler-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.778951   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.784122   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307-m03" is "Ready"
	I0919 22:25:17.784165   69358 pod_ready.go:86] duration metric: took 5.193982ms for pod "kube-scheduler-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.784183   69358 pod_ready.go:40] duration metric: took 42.630226972s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:25:17.833559   69358 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 22:25:17.835536   69358 out.go:179] * Done! kubectl is now configured to use "ha-326307" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7791f71e5d5a5       8c811b4aec35f       12 minutes ago      Running             busybox                   0                   b5e0c0fffea25       busybox-7b57f96db7-m8swj
	ca68bbc020e20       52546a367cc9e       13 minutes ago      Running             coredns                   0                   132023f334782       coredns-66bc5c9577-9j5pw
	1f618dc8f0392       52546a367cc9e       13 minutes ago      Running             coredns                   0                   a5ac32b4949ab       coredns-66bc5c9577-wqvzd
	f52d2d9f5881b       6e38f40d628db       13 minutes ago      Running             storage-provisioner       0                   7b77cca917bf4       storage-provisioner
	365cc00c2e009       409467f978b4a       13 minutes ago      Running             kindnet-cni               0                   96e027ec2b5fb       kindnet-gxnzs
	bd9e41958ffbb       df0860106674d       13 minutes ago      Running             kube-proxy                0                   06da62af16945       kube-proxy-8kxtv
	c6c963d9a0cae       765655ea60781       13 minutes ago      Running             kube-vip                  0                   5717652da0ef4       kube-vip-ha-326307
	456a0c3cbf5ce       46169d968e920       13 minutes ago      Running             kube-scheduler            0                   f02b9e82ff9b1       kube-scheduler-ha-326307
	05ab0247624a7       a0af72f2ec6d6       13 minutes ago      Running             kube-controller-manager   0                   6026f58e8c23a       kube-controller-manager-ha-326307
	e5c59a6abe977       5f1f5298c888d       13 minutes ago      Running             etcd                      0                   5f89382a468ad       etcd-ha-326307
	e80d65e3c7c18       90550c43ad2bc       13 minutes ago      Running             kube-apiserver            0                   3813626701bd1       kube-apiserver-ha-326307
	
	
	==> containerd <==
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.754439323Z" level=info msg="CreateContainer within sandbox \"a5ac32b4949abcc8c1007cd2947e92633d80d759aeaf0e7d6b490f2610f81170\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.768027085Z" level=info msg="CreateContainer within sandbox \"a5ac32b4949abcc8c1007cd2947e92633d80d759aeaf0e7d6b490f2610f81170\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\""
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.768844132Z" level=info msg="StartContainer for \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\""
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.836885904Z" level=info msg="StartContainer for \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\" returns successfully"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.632881043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9j5pw,Uid:7d073e38-b63e-494d-bda0-3dde372a950b,Namespace:kube-system,Attempt:0,}"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.759782586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9j5pw,Uid:7d073e38-b63e-494d-bda0-3dde372a950b,Namespace:kube-system,Attempt:0,} returns sandbox id \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.765750080Z" level=info msg="CreateContainer within sandbox \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.779792584Z" level=info msg="CreateContainer within sandbox \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.780572301Z" level=info msg="StartContainer for \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.854015268Z" level=info msg="StartContainer for \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\" returns successfully"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.151709073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-m8swj,Uid:7533a5f9-7c6d-4476-9e03-eb8abe0aadbc,Namespace:default,Attempt:0,}"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.267660233Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.268098400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-m8swj,Uid:7533a5f9-7c6d-4476-9e03-eb8abe0aadbc,Namespace:default,Attempt:0,} returns sandbox id \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\""
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.270196453Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.412014033Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.413088793Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.414707234Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.417602556Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.418335313Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 2.148090964s"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.418383876Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.423388311Z" level=info msg="CreateContainer within sandbox \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.442455841Z" level=info msg="CreateContainer within sandbox \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.443119612Z" level=info msg="StartContainer for \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.497884940Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.500641712Z" level=info msg="StartContainer for \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\" returns successfully"
	
	
	==> coredns [1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54337 - 24572 "HINFO IN 5143313645322175939.5313042790825403134. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069732464s
	[INFO] 10.244.0.4:35490 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000326279s
	[INFO] 10.244.0.4:55330 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.014239882s
	[INFO] 10.244.1.2:39628 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210602s
	[INFO] 10.244.1.2:46891 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.001261026s
	[INFO] 10.244.1.2:43124 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.00098216s
	[INFO] 10.244.1.2:49555 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.00013424s
	[INFO] 10.244.0.4:40362 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00024135s
	[INFO] 10.244.0.4:45629 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168694s
	[INFO] 10.244.1.2:52354 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189457s
	[INFO] 10.244.1.2:43857 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161715s
	[INFO] 10.244.1.2:51922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145764s
	[INFO] 10.244.1.2:57320 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009888497s
	[INFO] 10.244.1.2:49841 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169285s
	[INFO] 10.244.0.4:51548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159656s
	[INFO] 10.244.0.4:48681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110507s
	[INFO] 10.244.1.2:52993 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137337s
	[INFO] 10.244.0.4:59855 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113524s
	[INFO] 10.244.1.2:56284 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188608s
	[INFO] 10.244.1.2:58675 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149085s
	[INFO] 10.244.1.2:38911 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131505s
	
	
	==> coredns [ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93] <==
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49588 - 50300 "HINFO IN 9047056621409016881.2982736294753326061. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063128768s
	[INFO] 10.244.0.4:39328 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.014249598s
	[INFO] 10.244.0.4:59759 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.013332957s
	[INFO] 10.244.0.4:50336 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.008865788s
	[INFO] 10.244.1.2:42753 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000158745s
	[INFO] 10.244.0.4:52334 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261159s
	[INFO] 10.244.0.4:43558 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010645205s
	[INFO] 10.244.0.4:51059 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154122s
	[INFO] 10.244.0.4:46147 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012594143s
	[INFO] 10.244.0.4:39163 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143755s
	[INFO] 10.244.0.4:57061 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014731s
	[INFO] 10.244.1.2:59502 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000129746s
	[INFO] 10.244.1.2:49570 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172915s
	[INFO] 10.244.1.2:48519 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000175653s
	[INFO] 10.244.0.4:50569 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000326714s
	[INFO] 10.244.0.4:45465 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000234038s
	[INFO] 10.244.1.2:52569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176154s
	[INFO] 10.244.1.2:36719 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000205481s
	[INFO] 10.244.1.2:58705 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195468s
	[INFO] 10.244.0.4:38035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216169s
	[INFO] 10.244.0.4:52287 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129599s
	[INFO] 10.244.0.4:37285 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186803s
	[INFO] 10.244.1.2:39163 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165716s
	
	
	==> describe nodes <==
	Name:               ha-326307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:23:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:37:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-326307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 2616418f44a84ee78b49dce19e95d1fb
	  System UUID:                9c3f30ed-68b2-4a1c-af95-9031ae210a78
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-m8swj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-9j5pw             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 coredns-66bc5c9577-wqvzd             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 etcd-ha-326307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-gxnzs                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-326307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-326307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-8kxtv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-326307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-326307                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           12m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	
	
	Name:               ha-326307-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:37:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-326307-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 e4f3b60b3b464269bc193e23d4361613
	  System UUID:                8095cd89-f43b-4d8a-adef-b40d6aaa7ad2
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tfpvf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-326307-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-mk6pv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-326307-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-326307-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-q8mtj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-326307-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-326307-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        13m   kube-proxy       
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode  12m   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	
	
	Name:               ha-326307-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:37:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-326307-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 1434e19b2a274233a619428a76d99322
	  System UUID:                5814a8d4-c435-490f-8e5e-a8b038e01be7
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-jdczt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-326307-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-dmxl8                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-326307-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-326307-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-ws89d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-326307-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-326307-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  12m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode  12m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode  12m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	
	
	==> dmesg <==
	[Sep19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001853] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001006] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.090013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.467231] i8042: Warning: Keylock active
	[  +0.009747] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004210] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001075] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000901] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000908] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001042] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001465] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.534799] block sda: the capability attribute has been deprecated.
	[  +0.099617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027269] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.089616] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [e5c59a6abe97751de42afd27010936d1c3a401fad6cd730e75a1692a895b4fbc] <==
	{"level":"warn","ts":"2025-09-19T22:24:25.337105Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:25.337366Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:24:25.352476Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5512420eb470d1ce","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-09-19T22:24:25.352519Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.352532Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.355631Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5512420eb470d1ce","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-19T22:24:25.355692Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.355712Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.427429Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.428290Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:24:25.447984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:32950","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:24:25.491427Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(6130034673728934350 12593026477526642892 16449250771884659557)"}
	{"level":"info","ts":"2025-09-19T22:24:25.491593Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.491634Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:24:25.493734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:32956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:25.530775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:32980","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:24:25.607668Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"e4477a6cd7815365","bytes":946167,"size":"946 kB","took":"30.009579431s"}
	{"level":"info","ts":"2025-09-19T22:24:29.797825Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:31.923615Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:35.871798Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:53.749925Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:55.314881Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"5512420eb470d1ce","bytes":1356311,"size":"1.4 MB","took":"30.015547589s"}
	{"level":"info","ts":"2025-09-19T22:33:30.750666Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1558}
	{"level":"info","ts":"2025-09-19T22:33:30.775074Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1558,"took":"23.935678ms","hash":623549535,"current-db-size-bytes":4292608,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":2306048,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-09-19T22:33:30.775132Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":623549535,"revision":1558,"compact-revision":-1}
	
	
	==> kernel <==
	 22:37:28 up  1:19,  0 users,  load average: 1.19, 0.71, 0.74
	Linux ha-326307 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [365cc00c2e009eeed7e71d1202a4d406c12f0d9faee38762ba691eb2d7c71f89] <==
	I0919 22:36:40.991246       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:36:50.998290       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:36:50.998332       1 main.go:301] handling current node
	I0919 22:36:50.998351       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:36:50.998359       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:36:50.998554       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:36:50.998568       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:00.996278       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:37:00.996316       1 main.go:301] handling current node
	I0919 22:37:00.996331       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:37:00.996336       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:37:00.996584       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:37:00.996603       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:10.992294       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:37:10.992334       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:10.992571       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:37:10.992589       1 main.go:301] handling current node
	I0919 22:37:10.992605       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:37:10.992614       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:37:20.990243       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:37:20.990316       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:20.990527       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:37:20.990541       1 main.go:301] handling current node
	I0919 22:37:20.990553       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:37:20.990557       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e80d65e3c7c18da87f7fa003e39382f5a4285ba4782fc295197421c6b882a161] <==
	I0919 22:30:53.858237       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:15.996526       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:22.110278       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:33:31.733595       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:33:36.316232       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:33:41.440724       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:43.430235       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:04.843923       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:47.576277       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:36:07.778568       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:37:07.288814       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:37:22.531524       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43412: use of closed network connection
	E0919 22:37:22.776721       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43434: use of closed network connection
	E0919 22:37:22.970082       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43448: use of closed network connection
	E0919 22:37:23.110093       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43464: use of closed network connection
	E0919 22:37:23.308629       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43484: use of closed network connection
	E0919 22:37:23.494833       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43500: use of closed network connection
	E0919 22:37:23.634448       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43520: use of closed network connection
	E0919 22:37:23.803885       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43532: use of closed network connection
	E0919 22:37:23.968210       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43546: use of closed network connection
	E0919 22:37:26.548300       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43614: use of closed network connection
	E0919 22:37:26.721861       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43630: use of closed network connection
	E0919 22:37:26.901556       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43648: use of closed network connection
	E0919 22:37:27.077249       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43672: use of closed network connection
	E0919 22:37:27.253310       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43700: use of closed network connection
	
	
	==> kube-controller-manager [05ab0247624a7f8ffa6bc948e3abc3adc49911c297291eb0a6dd42e3df39f4cd] <==
	I0919 22:23:38.744532       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:23:38.744726       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0919 22:23:38.744739       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0919 22:23:38.744729       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:23:38.744737       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:23:38.744759       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:23:38.745195       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 22:23:38.745255       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:23:38.746448       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:23:38.748706       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 22:23:38.750017       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.750086       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:23:38.751270       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 22:23:38.760899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 22:23:38.760926       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 22:23:38.760971       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 22:23:38.765332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.771790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:08.307746       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m02\" does not exist"
	I0919 22:24:08.319829       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:24:08.699971       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m02"
	E0919 22:24:31.036531       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8ztpb failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8ztpb\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:24:31.706808       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m03\" does not exist"
	I0919 22:24:31.736561       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:24:33.715916       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m03"
	
	
	==> kube-proxy [bd9e41958ffbbb27dab3d180a56fb27df36a1a1896db3a6f322a8aabcda57677] <==
	I0919 22:23:40.183862       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:23:40.251957       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:23:40.353105       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:23:40.353291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:23:40.353503       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:23:40.383440       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:23:40.383522       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:23:40.391534       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:23:40.391999       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:23:40.392045       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:23:40.394189       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:23:40.394304       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:23:40.394470       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:23:40.394480       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:23:40.394266       1 config.go:200] "Starting service config controller"
	I0919 22:23:40.394506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:23:40.394279       1 config.go:309] "Starting node config controller"
	I0919 22:23:40.394533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:23:40.394540       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:23:40.494617       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:23:40.494643       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:23:40.494649       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [456a0c3cbf5ce028b9cbac658728c1fee13ad8e2659bfa0c625cd685d711c708] <==
	E0919 22:23:33.115040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:23:33.129278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:23:33.194774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:23:33.337699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0919 22:23:35.055089       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:24:08.346116       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:08.346301       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:08.365410       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-78xs2" node="ha-326307-m02"
	E0919 22:24:08.365600       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="kube-system/kindnet-78xs2"
	E0919 22:24:08.379248       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"kindnet-78xs2\" not found" pod="kube-system/kindnet-78xs2"
	E0919 22:24:10.002296       1 schedule_one.go:975] "Scheduler cache AssumePod failed" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:10.002334       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	I0919 22:24:10.002368       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:31.751287       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-pnj9r" node="ha-326307-m03"
	E0919 22:24:31.751375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-pnj9r"
	E0919 22:24:31.887089       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:31.887576       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 173e48ec-ef56-4824-9f55-a04b199b7943(kube-system/kindnet-qxwpq) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	E0919 22:24:31.887605       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	I0919 22:24:31.888969       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:35.828083       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-nzzlb" node="ha-326307-m03"
	E0919 22:24:35.828187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-nzzlb"
	E0919 22:24:35.839864       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	E0919 22:24:35.839940       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ba4fd407-2e93-4324-ab2d-4f192d79fdf5(kube-system/kindnet-dmxl8) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	E0919 22:24:35.839964       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	I0919 22:24:35.841757       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	
	
	==> kubelet <==
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638035    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-cni-cfg\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638087    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-xtables-lock\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638115    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70be5fcc-7ab6-4eb1-870d-988fee1a01bb-kube-proxy\") pod \"kube-proxy-8kxtv\" (UID: \"70be5fcc-7ab6-4eb1-870d-988fee1a01bb\") " pod="kube-system/kube-proxy-8kxtv"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140870    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64376c4d-1b82-490d-887d-7f628b134014-config-volume\") pod \"coredns-66bc5c9577-wqvzd\" (UID: \"64376c4d-1b82-490d-887d-7f628b134014\") " pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140945    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d073e38-b63e-494d-bda0-3dde372a950b-config-volume\") pod \"coredns-66bc5c9577-9j5pw\" (UID: \"7d073e38-b63e-494d-bda0-3dde372a950b\") " pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140976    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkhk\" (UniqueName: \"kubernetes.io/projected/64376c4d-1b82-490d-887d-7f628b134014-kube-api-access-8tkhk\") pod \"coredns-66bc5c9577-wqvzd\" (UID: \"64376c4d-1b82-490d-887d-7f628b134014\") " pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.141004    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gmbw\" (UniqueName: \"kubernetes.io/projected/7d073e38-b63e-494d-bda0-3dde372a950b-kube-api-access-8gmbw\") pod \"coredns-66bc5c9577-9j5pw\" (UID: \"7d073e38-b63e-494d-bda0-3dde372a950b\") " pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319752    1670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\""
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319858    1670 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\"" pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319884    1670 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\"" pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319966    1670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wqvzd_kube-system(64376c4d-1b82-490d-887d-7f628b134014)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wqvzd_kube-system(64376c4d-1b82-490d-887d-7f628b134014)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\\\": failed to find network info for sandbox \\\"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\\\"\"" pod="kube-system/coredns-66bc5c9577-wqvzd" podUID="64376c4d-1b82-490d-887d-7f628b134014"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332044    1670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\""
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332130    1670 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\"" pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332205    1670 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\"" pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332288    1670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9j5pw_kube-system(7d073e38-b63e-494d-bda0-3dde372a950b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9j5pw_kube-system(7d073e38-b63e-494d-bda0-3dde372a950b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\\\": failed to find network info for sandbox \\\"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\\\"\"" pod="kube-system/coredns-66bc5c9577-9j5pw" podUID="7d073e38-b63e-494d-bda0-3dde372a950b"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.543914    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cafe04c6-2dce-4b93-b6d1-205efc39b360-tmp\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.543969    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47vqf\" (UniqueName: \"kubernetes.io/projected/cafe04c6-2dce-4b93-b6d1-205efc39b360-kube-api-access-47vqf\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.684901    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gxnzs" podStartSLOduration=1.68487896 podStartE2EDuration="1.68487896s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:40.684630982 +0000 UTC m=+6.151051272" watchObservedRunningTime="2025-09-19 22:23:40.68487896 +0000 UTC m=+6.151299251"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.685802    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8kxtv" podStartSLOduration=1.685781067 podStartE2EDuration="1.685781067s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:40.670987608 +0000 UTC m=+6.137407898" watchObservedRunningTime="2025-09-19 22:23:40.685781067 +0000 UTC m=+6.152201360"
	Sep 19 22:23:41 ha-326307 kubelet[1670]: I0919 22:23:41.676063    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.676036489 podStartE2EDuration="1.676036489s" podCreationTimestamp="2025-09-19 22:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:41.675998333 +0000 UTC m=+7.142418624" watchObservedRunningTime="2025-09-19 22:23:41.676036489 +0000 UTC m=+7.142456778"
	Sep 19 22:23:45 ha-326307 kubelet[1670]: I0919 22:23:45.164667    1670 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:23:45 ha-326307 kubelet[1670]: I0919 22:23:45.165981    1670 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:23:52 ha-326307 kubelet[1670]: I0919 22:23:52.703916    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wqvzd" podStartSLOduration=13.703896267 podStartE2EDuration="13.703896267s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:52.703429297 +0000 UTC m=+18.169849612" watchObservedRunningTime="2025-09-19 22:23:52.703896267 +0000 UTC m=+18.170316558"
	Sep 19 22:23:56 ha-326307 kubelet[1670]: I0919 22:23:56.724956    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9j5pw" podStartSLOduration=17.724936721 podStartE2EDuration="17.724936721s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:56.724564031 +0000 UTC m=+22.190984322" watchObservedRunningTime="2025-09-19 22:23:56.724936721 +0000 UTC m=+22.191357012"
	Sep 19 22:25:18 ha-326307 kubelet[1670]: I0919 22:25:18.904730    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt2kb\" (UniqueName: \"kubernetes.io/projected/7533a5f9-7c6d-4476-9e03-eb8abe0aadbc-kube-api-access-rt2kb\") pod \"busybox-7b57f96db7-m8swj\" (UID: \"7533a5f9-7c6d-4476-9e03-eb8abe0aadbc\") " pod="default/busybox-7b57f96db7-m8swj"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-326307 -n ha-326307
helpers_test.go:269: (dbg) Run:  kubectl --context ha-326307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-jdczt
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-326307 describe pod busybox-7b57f96db7-jdczt
helpers_test.go:290: (dbg) kubectl --context ha-326307 describe pod busybox-7b57f96db7-jdczt:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-jdczt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ha-326307-m03/192.168.49.4
	Start Time:       Fri, 19 Sep 2025 22:25:18 +0000
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwg8l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xwg8l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                  Age                  From               Message
	  ----     ------                  ----                 ----               -------
	  Warning  FailedScheduling        12m                  default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-jdczt": pod busybox-7b57f96db7-jdczt is already assigned to node "ha-326307-m03"
	  Warning  FailedScheduling        12m                  default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-jdczt": pod busybox-7b57f96db7-jdczt is already assigned to node "ha-326307-m03"
	  Normal   Scheduled               12m                  default-scheduler  Successfully assigned default/busybox-7b57f96db7-jdczt to ha-326307-m03
	  Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f949ef20a496c1ef4510b9586bfdf0aa02ea1ca9948f762b5576ef36acab80c9": failed to find network info for sandbox "f949ef20a496c1ef4510b9586bfdf0aa02ea1ca9948f762b5576ef36acab80c9"
	  Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "306b20c8e47aaeb0b6ae068e406157020ceddab45da2b4f2ab7d80c0e47f4391": failed to find network info for sandbox "306b20c8e47aaeb0b6ae068e406157020ceddab45da2b4f2ab7d80c0e47f4391"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e6c50e24733dc1514dd610f1c51f99bc1a57d10929036ae887a87c4b187b9ac1": failed to find network info for sandbox "e6c50e24733dc1514dd610f1c51f99bc1a57d10929036ae887a87c4b187b9ac1"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9d4e7715d1c071862624112264db649229347a018044c9075df60fb9940c8e8a": failed to find network info for sandbox "9d4e7715d1c071862624112264db649229347a018044c9075df60fb9940c8e8a"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d188384b76e1cf43ce05a368351c54023e455d3fd0fddf79dc0d717558b93ee6": failed to find network info for sandbox "d188384b76e1cf43ce05a368351c54023e455d3fd0fddf79dc0d717558b93ee6"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "726dbde7347664ddd373a329867d125e92a7173ca43b01448ce154579a81a0bb": failed to find network info for sandbox "726dbde7347664ddd373a329867d125e92a7173ca43b01448ce154579a81a0bb"
	  Warning  FailedCreatePodSandBox  10m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e45f823e38e8e10ed14077a52e1750763b0c366d8a775bbf53f656c10861f185": failed to find network info for sandbox "e45f823e38e8e10ed14077a52e1750763b0c366d8a775bbf53f656c10861f185"
	  Warning  FailedCreatePodSandBox  10m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "92ae39dcfd89289f5a5fdc5ae0c23196a91b58214916aead7f95620a2697c009": failed to find network info for sandbox "92ae39dcfd89289f5a5fdc5ae0c23196a91b58214916aead7f95620a2697c009"
	  Warning  FailedCreatePodSandBox  10m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "98a14fefd702a6a8ff4d95d8bebac62053e439ee134d840d76e175ba4e8c45d6": failed to find network info for sandbox "98a14fefd702a6a8ff4d95d8bebac62053e439ee134d840d76e175ba4e8c45d6"
	  Warning  FailedCreatePodSandBox  2m3s (x39 over 10m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "68cb483b1808e73ea325cca055c7a7f1bd2a591a81aa8b2bcb8cb96560fd08b2": failed to find network info for sandbox "68cb483b1808e73ea325cca055c7a7f1bd2a591a81aa8b2bcb8cb96560fd08b2"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (3.16s)
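The repeated "failed to setup network for sandbox ...: failed to find network info for sandbox ..." events in the describe output above generally indicate that containerd found no CNI configuration on ha-326307-m03, i.e. the kindnet pod on that node had not (yet) written a config under /etc/cni/net.d. As a rough diagnostic sketch only (these commands and paths are assumptions about the usual kic/kindnet layout, not commands taken from this run), one could check the affected node directly:

    # Is a kindnet pod running on the affected node?
    kubectl --context ha-326307 -n kube-system get pods -o wide | grep kindnet

    # kic nodes are ordinary docker containers, so the CNI config dir can be inspected directly
    docker exec ha-326307-m03 ls -l /etc/cni/net.d

    # or via minikube's ssh helper, targeting the node by name
    minikube -p ha-326307 ssh -n ha-326307-m03 -- sudo ls -l /etc/cni/net.d

If that directory is empty while kindnet is stuck or crash-looping, the FailedCreatePodSandBox events above are the expected downstream symptom.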

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (31.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 node add --alsologtostderr -v 5: exit status 80 (28.885930366s)

                                                
                                                
-- stdout --
	* Adding node m04 to cluster ha-326307 as [worker]
	* Starting "ha-326307-m04" worker node in "ha-326307" cluster
	* Pulling base image v0.0.48 ...
	* Stopping node "ha-326307-m04"  ...
	* Deleting "ha-326307-m04" in docker ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:37:29.496117   85805 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:37:29.496322   85805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:37:29.496336   85805 out.go:374] Setting ErrFile to fd 2...
	I0919 22:37:29.496341   85805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:37:29.496541   85805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:37:29.496895   85805 mustload.go:65] Loading cluster: ha-326307
	I0919 22:37:29.497289   85805 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:37:29.497774   85805 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:37:29.517917   85805 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:37:29.518200   85805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:37:29.577538   85805 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:37:29.566746337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:37:29.577860   85805 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:37:29.599763   85805 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:37:29.600217   85805 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:37:29.619930   85805 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:37:29.620338   85805 api_server.go:166] Checking apiserver status ...
	I0919 22:37:29.620409   85805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:37:29.620449   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:37:29.643116   85805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:37:29.747389   85805 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0919 22:37:29.757879   85805 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:37:29.757946   85805 ssh_runner.go:195] Run: ls
	I0919 22:37:29.761819   85805 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:37:29.766362   85805 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:37:29.768658   85805 out.go:179] * Adding node m04 to cluster ha-326307 as [worker]
	I0919 22:37:29.770279   85805 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:37:29.770471   85805 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:37:29.772724   85805 out.go:179] * Starting "ha-326307-m04" worker node in "ha-326307" cluster
	I0919 22:37:29.774266   85805 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:37:29.775891   85805 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:37:29.777528   85805 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:37:29.777603   85805 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 22:37:29.777614   85805 cache.go:58] Caching tarball of preloaded images
	I0919 22:37:29.777619   85805 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:37:29.777775   85805 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:37:29.777794   85805 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:37:29.777963   85805 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:37:29.801476   85805 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:37:29.801497   85805 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:37:29.801513   85805 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:37:29.801553   85805 start.go:360] acquireMachinesLock for ha-326307-m04: {Name:mk65e4f546dae46b7c7a6cfe6f590e09a0a01676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:37:29.801683   85805 start.go:364] duration metric: took 111.491µs to acquireMachinesLock for "ha-326307-m04"
	I0919 22:37:29.801723   85805 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingre
ss:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0919 22:37:29.801859   85805 start.go:125] createHost starting for "m04" (driver="docker")
	I0919 22:37:29.804209   85805 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:37:29.804318   85805 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:37:29.804349   85805 client.go:168] LocalClient.Create starting
	I0919 22:37:29.804460   85805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:37:29.804510   85805 main.go:141] libmachine: Decoding PEM data...
	I0919 22:37:29.804525   85805 main.go:141] libmachine: Parsing certificate...
	I0919 22:37:29.804587   85805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:37:29.804607   85805 main.go:141] libmachine: Decoding PEM data...
	I0919 22:37:29.804618   85805 main.go:141] libmachine: Parsing certificate...
	I0919 22:37:29.804845   85805 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:37:29.823827   85805 network_create.go:77] Found existing network {name:ha-326307 subnet:0xc001428180 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:37:29.823899   85805 kic.go:121] calculated static IP "192.168.49.5" for the "ha-326307-m04" container
	I0919 22:37:29.824007   85805 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:37:29.843101   85805 cli_runner.go:164] Run: docker volume create ha-326307-m04 --label name.minikube.sigs.k8s.io=ha-326307-m04 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:37:29.863114   85805 oci.go:103] Successfully created a docker volume ha-326307-m04
	I0919 22:37:29.863200   85805 cli_runner.go:164] Run: docker run --rm --name ha-326307-m04-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m04 --entrypoint /usr/bin/test -v ha-326307-m04:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:37:30.258509   85805 oci.go:107] Successfully prepared a docker volume ha-326307-m04
	I0919 22:37:30.258551   85805 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:37:30.258573   85805 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:37:30.258654   85805 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:37:34.690146   85805 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.43143145s)
	I0919 22:37:34.690199   85805 kic.go:203] duration metric: took 4.431622634s to extract preloaded images to volume ...
	W0919 22:37:34.690294   85805 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:37:34.690323   85805 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:37:34.690358   85805 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:37:34.749080   85805 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307-m04 --name ha-326307-m04 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m04 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307-m04 --network ha-326307 --ip 192.168.49.5 --volume ha-326307-m04:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:37:35.063881   85805 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Running}}
	I0919 22:37:35.086364   85805 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:37:35.106237   85805 cli_runner.go:164] Run: docker exec ha-326307-m04 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:37:35.156252   85805 oci.go:144] the created container "ha-326307-m04" has a running status.
	I0919 22:37:35.156283   85805 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m04/id_rsa...
	I0919 22:37:35.645398   85805 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m04/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:37:35.645449   85805 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m04/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:37:35.683399   85805 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:37:35.702626   85805 cli_runner.go:164] Run: docker inspect ha-326307-m04
	I0919 22:37:35.720896   85805 errors.go:84] Postmortem inspect ("docker inspect ha-326307-m04"): -- stdout --
	[
	    {
	        "Id": "cc3e3303a784bbafa26768d9240cc160df5616eddae459aa5c5e1d012c42abd4",
	        "Created": "2025-09-19T22:37:34.76845513Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 255,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:37:34.813562941Z",
	            "FinishedAt": "2025-09-19T22:37:35.213613965Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/cc3e3303a784bbafa26768d9240cc160df5616eddae459aa5c5e1d012c42abd4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cc3e3303a784bbafa26768d9240cc160df5616eddae459aa5c5e1d012c42abd4/hostname",
	        "HostsPath": "/var/lib/docker/containers/cc3e3303a784bbafa26768d9240cc160df5616eddae459aa5c5e1d012c42abd4/hosts",
	        "LogPath": "/var/lib/docker/containers/cc3e3303a784bbafa26768d9240cc160df5616eddae459aa5c5e1d012c42abd4/cc3e3303a784bbafa26768d9240cc160df5616eddae459aa5c5e1d012c42abd4-json.log",
	        "Name": "/ha-326307-m04",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-326307-m04:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-326307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cc3e3303a784bbafa26768d9240cc160df5616eddae459aa5c5e1d012c42abd4",
	                "LowerDir": "/var/lib/docker/overlay2/6e393afb02beb5ca27ea0def98ec6334fcaf6081c696d7d2a67b6104b2fa576e-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e393afb02beb5ca27ea0def98ec6334fcaf6081c696d7d2a67b6104b2fa576e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e393afb02beb5ca27ea0def98ec6334fcaf6081c696d7d2a67b6104b2fa576e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e393afb02beb5ca27ea0def98ec6334fcaf6081c696d7d2a67b6104b2fa576e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-326307-m04",
	                "Source": "/var/lib/docker/volumes/ha-326307-m04/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-326307-m04",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-326307-m04",
	                "name.minikube.sigs.k8s.io": "ha-326307-m04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-326307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.5"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "465af21e2d8d112a34f14f7c3ab89eac4e6e57f582ff8e32b514381f55dd085e",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-326307-m04",
	                        "cc3e3303a784"
	                    ]
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
	I0919 22:37:35.720975   85805 cli_runner.go:164] Run: docker logs --timestamps --details ha-326307-m04
	I0919 22:37:35.740614   85805 errors.go:91] Postmortem logs ("docker logs --timestamps --details ha-326307-m04"): -- stdout --
	2025-09-19T22:37:35.052659558Z  + userns=
	2025-09-19T22:37:35.052716395Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2025-09-19T22:37:35.055671302Z  + validate_userns
	2025-09-19T22:37:35.055711640Z  + [[ -z '' ]]
	2025-09-19T22:37:35.055714968Z  + return
	2025-09-19T22:37:35.055717779Z  + configure_containerd
	2025-09-19T22:37:35.055720527Z  + local snapshotter=
	2025-09-19T22:37:35.055723297Z  + [[ -n '' ]]
	2025-09-19T22:37:35.055725677Z  + [[ -z '' ]]
	2025-09-19T22:37:35.056208707Z  ++ stat -f -c %T /kind
	2025-09-19T22:37:35.057584007Z  + container_filesystem=overlayfs
	2025-09-19T22:37:35.057602349Z  + [[ overlayfs == \z\f\s ]]
	2025-09-19T22:37:35.057606235Z  + [[ -n '' ]]
	2025-09-19T22:37:35.057609378Z  + configure_proxy
	2025-09-19T22:37:35.057612316Z  + mkdir -p /etc/systemd/system.conf.d/
	2025-09-19T22:37:35.062790276Z  + [[ ! -z '' ]]
	2025-09-19T22:37:35.062808892Z  + cat
	2025-09-19T22:37:35.064187629Z  + fix_mount
	2025-09-19T22:37:35.064206928Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2025-09-19T22:37:35.064211050Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2025-09-19T22:37:35.064938149Z  ++ which mount
	2025-09-19T22:37:35.066692500Z  ++ which umount
	2025-09-19T22:37:35.067713449Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2025-09-19T22:37:35.078647542Z  ++ which mount
	2025-09-19T22:37:35.080061538Z  ++ which umount
	2025-09-19T22:37:35.081034209Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2025-09-19T22:37:35.082971284Z  +++ which mount
	2025-09-19T22:37:35.084234707Z  ++ stat -f -c %T /usr/bin/mount
	2025-09-19T22:37:35.085457837Z  + [[ overlayfs == \a\u\f\s ]]
	2025-09-19T22:37:35.085478327Z  + echo 'INFO: remounting /sys read-only'
	2025-09-19T22:37:35.085482742Z  INFO: remounting /sys read-only
	2025-09-19T22:37:35.085485946Z  + mount -o remount,ro /sys
	2025-09-19T22:37:35.087810460Z  + echo 'INFO: making mounts shared'
	2025-09-19T22:37:35.087832870Z  INFO: making mounts shared
	2025-09-19T22:37:35.087836508Z  + mount --make-rshared /
	2025-09-19T22:37:35.089415438Z  + retryable_fix_cgroup
	2025-09-19T22:37:35.089811512Z  ++ seq 0 10
	2025-09-19T22:37:35.090680948Z  + for i in $(seq 0 10)
	2025-09-19T22:37:35.090700025Z  + fix_cgroup
	2025-09-19T22:37:35.090704248Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2025-09-19T22:37:35.090707354Z  + echo 'INFO: detected cgroup v2'
	2025-09-19T22:37:35.090710252Z  INFO: detected cgroup v2
	2025-09-19T22:37:35.090729869Z  + return
	2025-09-19T22:37:35.090732993Z  + return
	2025-09-19T22:37:35.090762935Z  + fix_machine_id
	2025-09-19T22:37:35.090769120Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2025-09-19T22:37:35.090772401Z  INFO: clearing and regenerating /etc/machine-id
	2025-09-19T22:37:35.090775372Z  + rm -f /etc/machine-id
	2025-09-19T22:37:35.092008722Z  + systemd-machine-id-setup
	2025-09-19T22:37:35.096056942Z  Initializing machine ID from random generator.
	2025-09-19T22:37:35.098744806Z  + fix_product_name
	2025-09-19T22:37:35.098769928Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2025-09-19T22:37:35.098825435Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2025-09-19T22:37:35.098839319Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2025-09-19T22:37:35.098843696Z  + echo kind
	2025-09-19T22:37:35.100437473Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2025-09-19T22:37:35.101915118Z  + fix_product_uuid
	2025-09-19T22:37:35.101928590Z  + [[ ! -f /kind/product_uuid ]]
	2025-09-19T22:37:35.101930963Z  + cat /proc/sys/kernel/random/uuid
	2025-09-19T22:37:35.103277310Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2025-09-19T22:37:35.103289843Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2025-09-19T22:37:35.103292265Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2025-09-19T22:37:35.103294214Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2025-09-19T22:37:35.104883398Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2025-09-19T22:37:35.104911447Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2025-09-19T22:37:35.104915797Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2025-09-19T22:37:35.104919033Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2025-09-19T22:37:35.106470770Z  + select_iptables
	2025-09-19T22:37:35.106488309Z  + local mode num_legacy_lines num_nft_lines
	2025-09-19T22:37:35.107411429Z  ++ grep -c '^-'
	2025-09-19T22:37:35.110125058Z  ++ true
	2025-09-19T22:37:35.110381779Z  + num_legacy_lines=0
	2025-09-19T22:37:35.111288094Z  ++ grep -c '^-'
	2025-09-19T22:37:35.117938283Z  + num_nft_lines=6
	2025-09-19T22:37:35.117962007Z  + '[' 0 -ge 6 ']'
	2025-09-19T22:37:35.117964573Z  + mode=nft
	2025-09-19T22:37:35.117966878Z  + echo 'INFO: setting iptables to detected mode: nft'
	2025-09-19T22:37:35.117968833Z  INFO: setting iptables to detected mode: nft
	2025-09-19T22:37:35.117970672Z  + update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-19T22:37:35.117983479Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-nft'
	2025-09-19T22:37:35.117985307Z  + local 'args=--set iptables /usr/sbin/iptables-nft'
	2025-09-19T22:37:35.118498527Z  ++ seq 0 15
	2025-09-19T22:37:35.119271208Z  + for i in $(seq 0 15)
	2025-09-19T22:37:35.119287854Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-19T22:37:35.122941610Z  + return
	2025-09-19T22:37:35.122966370Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-19T22:37:35.123018172Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-19T22:37:35.123035569Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-19T22:37:35.123508993Z  ++ seq 0 15
	2025-09-19T22:37:35.124800817Z  + for i in $(seq 0 15)
	2025-09-19T22:37:35.124816689Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-19T22:37:35.128102960Z  + return
	2025-09-19T22:37:35.128186375Z  + enable_network_magic
	2025-09-19T22:37:35.128260630Z  + local docker_embedded_dns_ip=127.0.0.11
	2025-09-19T22:37:35.128270527Z  + local docker_host_ip
	2025-09-19T22:37:35.129842446Z  ++ cut '-d ' -f1
	2025-09-19T22:37:35.129860742Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:37:35.129920580Z  +++ timeout 5 getent ahostsv4 host.docker.internal
	2025-09-19T22:37:35.169061752Z  + docker_host_ip=
	2025-09-19T22:37:35.169097731Z  + [[ -z '' ]]
	2025-09-19T22:37:35.169809873Z  ++ ip -4 route show default
	2025-09-19T22:37:35.169842896Z  ++ cut '-d ' -f3
	2025-09-19T22:37:35.171984015Z  + docker_host_ip=192.168.49.1
	2025-09-19T22:37:35.172341816Z  + iptables-save
	2025-09-19T22:37:35.172711135Z  + iptables-restore
	2025-09-19T22:37:35.175139581Z  + sed -e 's/-d 127.0.0.11/-d 192.168.49.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.49.1:53/g' -e 's/p -j DNAT --to-destination 127.0.0.11/p --dport 53 -j DNAT --to-destination 127.0.0.11/g'
	2025-09-19T22:37:35.185850212Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2025-09-19T22:37:35.187973245Z  ++ sed -e s/127.0.0.11/192.168.49.1/g /etc/resolv.conf.original
	2025-09-19T22:37:35.189320025Z  + replaced='# Generated by Docker Engine.
	2025-09-19T22:37:35.189339152Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-19T22:37:35.189342629Z  # has been modified.
	2025-09-19T22:37:35.189345168Z  
	2025-09-19T22:37:35.189347538Z  nameserver 192.168.49.1
	2025-09-19T22:37:35.189350640Z  search local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-19T22:37:35.189353748Z  options edns0 trust-ad ndots:0
	2025-09-19T22:37:35.189370343Z  
	2025-09-19T22:37:35.189373440Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-19T22:37:35.189376381Z  # ExtServers: [host(127.0.0.53)]
	2025-09-19T22:37:35.189379225Z  # Overrides: []
	2025-09-19T22:37:35.189381988Z  # Option ndots from: internal'
	2025-09-19T22:37:35.189384854Z  + [[ '' == '' ]]
	2025-09-19T22:37:35.189387441Z  + echo '# Generated by Docker Engine.
	2025-09-19T22:37:35.189390288Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-19T22:37:35.189393173Z  # has been modified.
	2025-09-19T22:37:35.189395915Z  
	2025-09-19T22:37:35.189398658Z  nameserver 192.168.49.1
	2025-09-19T22:37:35.189401426Z  search local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-19T22:37:35.189404478Z  options edns0 trust-ad ndots:0
	2025-09-19T22:37:35.189416706Z  
	2025-09-19T22:37:35.189419321Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-19T22:37:35.189422311Z  # ExtServers: [host(127.0.0.53)]
	2025-09-19T22:37:35.189425143Z  # Overrides: []
	2025-09-19T22:37:35.189427911Z  # Option ndots from: internal'
	2025-09-19T22:37:35.189687226Z  + files_to_update=('/etc/kubernetes/manifests/etcd.yaml' '/etc/kubernetes/manifests/kube-apiserver.yaml' '/etc/kubernetes/manifests/kube-controller-manager.yaml' '/etc/kubernetes/manifests/kube-scheduler.yaml' '/etc/kubernetes/controller-manager.conf' '/etc/kubernetes/scheduler.conf' '/kind/kubeadm.conf' '/var/lib/kubelet/kubeadm-flags.env')
	2025-09-19T22:37:35.189698946Z  + local files_to_update
	2025-09-19T22:37:35.189701748Z  + local should_fix_certificate=false
	2025-09-19T22:37:35.190960200Z  ++ cut '-d ' -f1
	2025-09-19T22:37:35.190990650Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:37:35.191545889Z  ++++ hostname
	2025-09-19T22:37:35.192417556Z  +++ timeout 5 getent ahostsv4 ha-326307-m04
	2025-09-19T22:37:35.195519956Z  + curr_ipv4=192.168.49.5
	2025-09-19T22:37:35.195536742Z  + echo 'INFO: Detected IPv4 address: 192.168.49.5'
	2025-09-19T22:37:35.195539016Z  INFO: Detected IPv4 address: 192.168.49.5
	2025-09-19T22:37:35.195540828Z  + '[' -f /kind/old-ipv4 ']'
	2025-09-19T22:37:35.195542562Z  + [[ -n 192.168.49.5 ]]
	2025-09-19T22:37:35.195544373Z  + echo -n 192.168.49.5
	2025-09-19T22:37:35.196858648Z  ++ cut '-d ' -f1
	2025-09-19T22:37:35.196944368Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:37:35.197642125Z  ++++ hostname
	2025-09-19T22:37:35.198517263Z  +++ timeout 5 getent ahostsv6 ha-326307-m04
	2025-09-19T22:37:35.201637710Z  + curr_ipv6=
	2025-09-19T22:37:35.201659671Z  + echo 'INFO: Detected IPv6 address: '
	2025-09-19T22:37:35.201694353Z  INFO: Detected IPv6 address: 
	2025-09-19T22:37:35.201698084Z  + '[' -f /kind/old-ipv6 ']'
	2025-09-19T22:37:35.201702711Z  + [[ -n '' ]]
	2025-09-19T22:37:35.201705550Z  + false
	2025-09-19T22:37:35.202368399Z  ++ uname -a
	2025-09-19T22:37:35.203515181Z  + echo 'entrypoint completed: Linux ha-326307-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux'
	2025-09-19T22:37:35.203530094Z  entrypoint completed: Linux ha-326307-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	2025-09-19T22:37:35.203533760Z  + exec /sbin/init
	2025-09-19T22:37:35.210332231Z  systemd 249.11-0ubuntu3.16 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
	2025-09-19T22:37:35.210363126Z  Detected virtualization docker.
	2025-09-19T22:37:35.210366684Z  Detected architecture x86-64.
	2025-09-19T22:37:35.210460017Z  
	2025-09-19T22:37:35.210475085Z  Welcome to Ubuntu 22.04.5 LTS!
	2025-09-19T22:37:35.210478265Z  
	2025-09-19T22:37:35.210937405Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:37:35.210949307Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:37:35.210953036Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:37:35.210956448Z  Exiting PID 1...
	
	-- /stdout --
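The entrypoint trace above finishes its setup (the container's resolv.conf rewritten to point at the 192.168.49.1 gateway, the node's IPv4 address detected as 192.168.49.5) and hands off to /sbin/init, which then dies as PID 1 with "Failed to create control group inotify object: Too many open files". That systemd message usually means the host's per-user inotify instance limit is exhausted, since every systemd instance running inside a kicbase container consumes one. A minimal host-side sketch for checking and raising the limits; the commands are standard sysctl usage, but the 1024/524288 values are illustrative assumptions, not values taken from this run:

    # Inspect the current inotify limits on the Ubuntu host.
    sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches

    # Raise them for the running kernel (values are assumptions; tune to the workload).
    sudo sysctl -w fs.inotify.max_user_instances=1024
    sudo sysctl -w fs.inotify.max_user_watches=524288

    # Persist across reboots.
    printf 'fs.inotify.max_user_instances=1024\nfs.inotify.max_user_watches=524288\n' | sudo tee /etc/sysctl.d/99-inotify.conf
    sudo sysctl --system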
	I0919 22:37:35.740715   85805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:37:35.801905   85805 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 22:37:35.790822994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:37:35.801975   85805 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 22:37:35.790822994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux A
rchitecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:fals
e Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:37:35.802070   85805 network_create.go:284] running [docker network inspect ha-326307-m04] to gather additional debugging logs...
	I0919 22:37:35.802089   85805 cli_runner.go:164] Run: docker network inspect ha-326307-m04
	W0919 22:37:35.820075   85805 cli_runner.go:211] docker network inspect ha-326307-m04 returned with exit code 1
	I0919 22:37:35.820105   85805 network_create.go:287] error running [docker network inspect ha-326307-m04]: docker network inspect ha-326307-m04: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-326307-m04 not found
	I0919 22:37:35.820118   85805 network_create.go:289] output of [docker network inspect ha-326307-m04]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-326307-m04 not found
	
	** /stderr **
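The "network ha-326307-m04 not found" error here is expected: minikube runs this per-node inspect only to gather extra debugging output, while the node containers all attach to the single cluster network named ha-326307 (which is also why the later "docker network rm ha-326307" at 22:37:42 is refused while other nodes are still running). A quick sketch, assuming the docker CLI on the CI host, for confirming the cluster network's subnet and which containers currently hold it:

    # Subnet plus attached containers of the cluster network (format string is a sketch).
    docker network inspect ha-326307 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}| {{range .Containers}}{{.Name}} {{end}}'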
	I0919 22:37:35.820203   85805 client.go:171] duration metric: took 6.015842942s to LocalClient.Create
	I0919 22:37:37.820432   85805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:37:37.820479   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:37.841717   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:37:37.841861   85805 retry.go:31] will retry after 274.759813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:38.117378   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:38.135835   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:37:38.135935   85805 retry.go:31] will retry after 477.515217ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:38.614389   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:38.633542   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:37:38.633684   85805 retry.go:31] will retry after 380.975397ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:39.015397   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:39.034596   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:37:39.034729   85805 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:37:39.034742   85805 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:39.034785   85805 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:37:39.034820   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:39.055032   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:37:39.055175   85805 retry.go:31] will retry after 353.53943ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:39.409847   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:39.429690   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:37:39.429789   85805 retry.go:31] will retry after 230.849904ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:39.661311   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:39.682412   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:37:39.682502   85805 retry.go:31] will retry after 785.31885ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:40.468339   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:40.488549   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:37:40.488667   85805 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:37:40.488682   85805 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:40.488699   85805 start.go:128] duration metric: took 10.686832175s to createHost
	I0919 22:37:40.488708   85805 start.go:83] releasing machines lock for "ha-326307-m04", held for 10.687009163s
	W0919 22:37:40.488726   85805 start.go:714] error starting host: creating host: create: creating: prepare kic ssh: container name "ha-326307-m04" state Stopped: log: 2025-09-19T22:37:35.210937405Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:37:35.210949307Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:37:35.210953036Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:37:35.210956448Z  Exiting PID 1...: container exited unexpectedly
	I0919 22:37:40.489123   85805 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:37:40.507448   85805 stop.go:39] StopHost: ha-326307-m04
	W0919 22:37:40.507728   85805 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0919 22:37:40.510250   85805 out.go:179] * Stopping node "ha-326307-m04"  ...
	I0919 22:37:40.511697   85805 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:37:40.531206   85805 stop.go:87] host is in state Stopped
	I0919 22:37:40.531302   85805 main.go:141] libmachine: Stopping "ha-326307-m04"...
	I0919 22:37:40.531395   85805 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:37:40.550196   85805 stop.go:66] stop err: Machine "ha-326307-m04" is already stopped.
	I0919 22:37:40.550245   85805 stop.go:69] host is already stopped
	W0919 22:37:41.550404   85805 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0919 22:37:41.552574   85805 out.go:179] * Deleting "ha-326307-m04" in docker ...
	I0919 22:37:41.553920   85805 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-326307-m04
	I0919 22:37:41.572790   85805 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:37:41.592604   85805 cli_runner.go:164] Run: docker exec --privileged -t ha-326307-m04 /bin/bash -c "sudo init 0"
	W0919 22:37:41.611793   85805 cli_runner.go:211] docker exec --privileged -t ha-326307-m04 /bin/bash -c "sudo init 0" returned with exit code 1
	I0919 22:37:41.611827   85805 oci.go:659] error shutdown ha-326307-m04: docker exec --privileged -t ha-326307-m04 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container cc3e3303a784bbafa26768d9240cc160df5616eddae459aa5c5e1d012c42abd4 is not running
	I0919 22:37:42.612045   85805 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:37:42.632127   85805 oci.go:667] container ha-326307-m04 status is Stopped
	I0919 22:37:42.632176   85805 oci.go:679] Successfully shutdown container ha-326307-m04
	I0919 22:37:42.632232   85805 cli_runner.go:164] Run: docker rm -f -v ha-326307-m04
	I0919 22:37:42.660106   85805 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-326307-m04
	W0919 22:37:42.678987   85805 cli_runner.go:211] docker container inspect -f {{.Id}} ha-326307-m04 returned with exit code 1
	I0919 22:37:42.679061   85805 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:37:42.699361   85805 cli_runner.go:164] Run: docker network rm ha-326307
	W0919 22:37:42.718000   85805 cli_runner.go:211] docker network rm ha-326307 returned with exit code 1
	W0919 22:37:42.718111   85805 kic.go:390] failed to remove network (which might be okay) ha-326307: unable to delete a network that is attached to a running container
	W0919 22:37:42.718350   85805 out.go:285] ! StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: container name "ha-326307-m04" state Stopped: log: 2025-09-19T22:37:35.210937405Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:37:35.210949307Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:37:35.210953036Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:37:35.210956448Z  Exiting PID 1...: container exited unexpectedly
	! StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: container name "ha-326307-m04" state Stopped: log: 2025-09-19T22:37:35.210937405Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:37:35.210949307Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:37:35.210953036Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:37:35.210956448Z  Exiting PID 1...: container exited unexpectedly
	I0919 22:37:42.718383   85805 start.go:729] Will try again in 5 seconds ...
	I0919 22:37:47.720282   85805 start.go:360] acquireMachinesLock for ha-326307-m04: {Name:mk65e4f546dae46b7c7a6cfe6f590e09a0a01676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:37:47.720372   85805 start.go:364] duration metric: took 57.283µs to acquireMachinesLock for "ha-326307-m04"
	I0919 22:37:47.720410   85805 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingre
ss:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}
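The flattened blob above is minikube's in-memory ClusterConfig for the retry; note that the m04 node entry still has an empty IP and container runtime, which is why the kic driver recomputes the static 192.168.49.5 address from the existing network below. The same configuration is persisted per profile on the host, which is easier to read than this single-line dump. A sketch, assuming python3 is available and that the profile directory follows the MINIKUBE_HOME layout visible elsewhere in this run:

    # Pretty-print the persisted cluster config for the ha-326307 profile (path is an assumption).
    python3 -m json.tool \
      /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json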
	I0919 22:37:47.720533   85805 start.go:125] createHost starting for "m04" (driver="docker")
	I0919 22:37:47.722322   85805 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:37:47.722456   85805 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:37:47.722484   85805 client.go:168] LocalClient.Create starting
	I0919 22:37:47.722539   85805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:37:47.722570   85805 main.go:141] libmachine: Decoding PEM data...
	I0919 22:37:47.722583   85805 main.go:141] libmachine: Parsing certificate...
	I0919 22:37:47.722642   85805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:37:47.722663   85805 main.go:141] libmachine: Decoding PEM data...
	I0919 22:37:47.722673   85805 main.go:141] libmachine: Parsing certificate...
	I0919 22:37:47.722884   85805 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:37:47.742266   85805 network_create.go:77] Found existing network {name:ha-326307 subnet:0xc001805470 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:37:47.742306   85805 kic.go:121] calculated static IP "192.168.49.5" for the "ha-326307-m04" container
	I0919 22:37:47.742358   85805 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:37:47.763237   85805 cli_runner.go:164] Run: docker volume create ha-326307-m04 --label name.minikube.sigs.k8s.io=ha-326307-m04 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:37:47.782116   85805 oci.go:103] Successfully created a docker volume ha-326307-m04
	I0919 22:37:47.782230   85805 cli_runner.go:164] Run: docker run --rm --name ha-326307-m04-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m04 --entrypoint /usr/bin/test -v ha-326307-m04:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:37:48.069099   85805 oci.go:107] Successfully prepared a docker volume ha-326307-m04
	I0919 22:37:48.069144   85805 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:37:48.069206   85805 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:37:48.069261   85805 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:37:52.595244   85805 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.525923961s)
	I0919 22:37:52.595281   85805 kic.go:203] duration metric: took 4.526071055s to extract preloaded images to volume ...
	W0919 22:37:52.595396   85805 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:37:52.595436   85805 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:37:52.595514   85805 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:37:52.652312   85805 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307-m04 --name ha-326307-m04 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m04 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307-m04 --network ha-326307 --ip 192.168.49.5 --volume ha-326307-m04:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:37:52.944534   85805 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Running}}
	I0919 22:37:52.965723   85805 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:37:52.986729   85805 cli_runner.go:164] Run: docker exec ha-326307-m04 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:37:53.037675   85805 oci.go:144] the created container "ha-326307-m04" has a running status.
	I0919 22:37:53.037702   85805 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m04/id_rsa...
	I0919 22:37:53.124770   85805 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m04/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:37:53.124823   85805 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m04/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:37:53.421326   85805 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:37:53.440995   85805 cli_runner.go:164] Run: docker inspect ha-326307-m04
	I0919 22:37:53.459691   85805 errors.go:84] Postmortem inspect ("docker inspect ha-326307-m04"): -- stdout --
	[
	    {
	        "Id": "3a48ce33cd886fbc6d1d0a224a9327fb96a4486e3357a7c98463859471e79909",
	        "Created": "2025-09-19T22:37:52.670918572Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 255,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:37:52.712030818Z",
	            "FinishedAt": "2025-09-19T22:37:53.085958465Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3a48ce33cd886fbc6d1d0a224a9327fb96a4486e3357a7c98463859471e79909/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a48ce33cd886fbc6d1d0a224a9327fb96a4486e3357a7c98463859471e79909/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a48ce33cd886fbc6d1d0a224a9327fb96a4486e3357a7c98463859471e79909/hosts",
	        "LogPath": "/var/lib/docker/containers/3a48ce33cd886fbc6d1d0a224a9327fb96a4486e3357a7c98463859471e79909/3a48ce33cd886fbc6d1d0a224a9327fb96a4486e3357a7c98463859471e79909-json.log",
	        "Name": "/ha-326307-m04",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-326307-m04:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-326307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3a48ce33cd886fbc6d1d0a224a9327fb96a4486e3357a7c98463859471e79909",
	                "LowerDir": "/var/lib/docker/overlay2/d0b476404bb32d918d18e41a89c0cba8856cadd2d1f2f517f2dde009393c8e0b-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d0b476404bb32d918d18e41a89c0cba8856cadd2d1f2f517f2dde009393c8e0b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d0b476404bb32d918d18e41a89c0cba8856cadd2d1f2f517f2dde009393c8e0b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d0b476404bb32d918d18e41a89c0cba8856cadd2d1f2f517f2dde009393c8e0b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-326307-m04",
	                "Source": "/var/lib/docker/volumes/ha-326307-m04/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-326307-m04",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-326307-m04",
	                "name.minikube.sigs.k8s.io": "ha-326307-m04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-326307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.5"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "465af21e2d8d112a34f14f7c3ab89eac4e6e57f582ff8e32b514381f55dd085e",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-326307-m04",
	                        "3a48ce33cd88"
	                    ]
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
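The inspect output shows why minikube cannot resolve a HostPort for 22/tcp on this node: the container ran for well under a second (StartedAt 22:37:52.71Z, FinishedAt 22:37:53.08Z), exited with code 255, and its NetworkSettings.Ports map was never populated. When only those fields matter, a format string keeps the postmortem much shorter than the full JSON; a sketch using the same docker CLI:

    # Summarise the container's final state instead of dumping the whole inspect document.
    docker container inspect ha-326307-m04 \
      --format 'status={{.State.Status}} exit={{.State.ExitCode}} started={{.State.StartedAt}} finished={{.State.FinishedAt}}'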
	I0919 22:37:53.459770   85805 cli_runner.go:164] Run: docker logs --timestamps --details ha-326307-m04
	I0919 22:37:53.481181   85805 errors.go:91] Postmortem logs ("docker logs --timestamps --details ha-326307-m04"): -- stdout --
	2025-09-19T22:37:52.936869699Z  + userns=
	2025-09-19T22:37:52.936909677Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2025-09-19T22:37:52.939815500Z  + validate_userns
	2025-09-19T22:37:52.939837838Z  + [[ -z '' ]]
	2025-09-19T22:37:52.939840526Z  + return
	2025-09-19T22:37:52.939842343Z  + configure_containerd
	2025-09-19T22:37:52.939844027Z  + local snapshotter=
	2025-09-19T22:37:52.939845680Z  + [[ -n '' ]]
	2025-09-19T22:37:52.939847267Z  + [[ -z '' ]]
	2025-09-19T22:37:52.940352975Z  ++ stat -f -c %T /kind
	2025-09-19T22:37:52.941663430Z  + container_filesystem=overlayfs
	2025-09-19T22:37:52.941682185Z  + [[ overlayfs == \z\f\s ]]
	2025-09-19T22:37:52.941685893Z  + [[ -n '' ]]
	2025-09-19T22:37:52.941789565Z  + configure_proxy
	2025-09-19T22:37:52.941802338Z  + mkdir -p /etc/systemd/system.conf.d/
	2025-09-19T22:37:52.946309757Z  + [[ ! -z '' ]]
	2025-09-19T22:37:52.946333632Z  + cat
	2025-09-19T22:37:52.947981849Z  + fix_mount
	2025-09-19T22:37:52.948001649Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2025-09-19T22:37:52.948004420Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2025-09-19T22:37:52.948588722Z  ++ which mount
	2025-09-19T22:37:52.950783504Z  ++ which umount
	2025-09-19T22:37:52.952320521Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2025-09-19T22:37:52.960050810Z  ++ which mount
	2025-09-19T22:37:52.962230402Z  ++ which umount
	2025-09-19T22:37:52.963492374Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2025-09-19T22:37:52.965666057Z  +++ which mount
	2025-09-19T22:37:52.966891132Z  ++ stat -f -c %T /usr/bin/mount
	2025-09-19T22:37:52.968056788Z  + [[ overlayfs == \a\u\f\s ]]
	2025-09-19T22:37:52.968072673Z  + echo 'INFO: remounting /sys read-only'
	2025-09-19T22:37:52.968075155Z  INFO: remounting /sys read-only
	2025-09-19T22:37:52.968076952Z  + mount -o remount,ro /sys
	2025-09-19T22:37:52.970064173Z  + echo 'INFO: making mounts shared'
	2025-09-19T22:37:52.970084809Z  INFO: making mounts shared
	2025-09-19T22:37:52.970088346Z  + mount --make-rshared /
	2025-09-19T22:37:52.971674213Z  + retryable_fix_cgroup
	2025-09-19T22:37:52.972112259Z  ++ seq 0 10
	2025-09-19T22:37:52.973021099Z  + for i in $(seq 0 10)
	2025-09-19T22:37:52.973040566Z  + fix_cgroup
	2025-09-19T22:37:52.973097011Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2025-09-19T22:37:52.973109019Z  + echo 'INFO: detected cgroup v2'
	2025-09-19T22:37:52.973111797Z  INFO: detected cgroup v2
	2025-09-19T22:37:52.973129552Z  + return
	2025-09-19T22:37:52.973132614Z  + return
	2025-09-19T22:37:52.973138668Z  + fix_machine_id
	2025-09-19T22:37:52.973141449Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2025-09-19T22:37:52.973172674Z  INFO: clearing and regenerating /etc/machine-id
	2025-09-19T22:37:52.973224082Z  + rm -f /etc/machine-id
	2025-09-19T22:37:52.974564152Z  + systemd-machine-id-setup
	2025-09-19T22:37:52.978894779Z  Initializing machine ID from random generator.
	2025-09-19T22:37:52.983510639Z  + fix_product_name
	2025-09-19T22:37:52.983538239Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2025-09-19T22:37:52.983560812Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2025-09-19T22:37:52.983566667Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2025-09-19T22:37:52.983569602Z  + echo kind
	2025-09-19T22:37:52.985326478Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2025-09-19T22:37:52.987311969Z  + fix_product_uuid
	2025-09-19T22:37:52.987333188Z  + [[ ! -f /kind/product_uuid ]]
	2025-09-19T22:37:52.987337904Z  + cat /proc/sys/kernel/random/uuid
	2025-09-19T22:37:52.988699713Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2025-09-19T22:37:52.988734350Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2025-09-19T22:37:52.988738194Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2025-09-19T22:37:52.988741620Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2025-09-19T22:37:52.990483680Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2025-09-19T22:37:52.990504602Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2025-09-19T22:37:52.990508218Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2025-09-19T22:37:52.990511855Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2025-09-19T22:37:52.992063039Z  + select_iptables
	2025-09-19T22:37:52.992077774Z  + local mode num_legacy_lines num_nft_lines
	2025-09-19T22:37:52.993118827Z  ++ grep -c '^-'
	2025-09-19T22:37:52.996203453Z  ++ true
	2025-09-19T22:37:52.996582924Z  + num_legacy_lines=0
	2025-09-19T22:37:52.997570056Z  ++ grep -c '^-'
	2025-09-19T22:37:53.003999477Z  + num_nft_lines=6
	2025-09-19T22:37:53.004026369Z  + '[' 0 -ge 6 ']'
	2025-09-19T22:37:53.004030625Z  + mode=nft
	2025-09-19T22:37:53.004033299Z  + echo 'INFO: setting iptables to detected mode: nft'
	2025-09-19T22:37:53.004036012Z  INFO: setting iptables to detected mode: nft
	2025-09-19T22:37:53.004113332Z  + update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-19T22:37:53.004140279Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-nft'
	2025-09-19T22:37:53.004143975Z  + local 'args=--set iptables /usr/sbin/iptables-nft'
	2025-09-19T22:37:53.004721665Z  ++ seq 0 15
	2025-09-19T22:37:53.005614828Z  + for i in $(seq 0 15)
	2025-09-19T22:37:53.005629826Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-19T22:37:53.006938804Z  + return
	2025-09-19T22:37:53.006961497Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-19T22:37:53.007079285Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-19T22:37:53.007084854Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-19T22:37:53.007562288Z  ++ seq 0 15
	2025-09-19T22:37:53.008597432Z  + for i in $(seq 0 15)
	2025-09-19T22:37:53.008614703Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-19T22:37:53.009826684Z  + return
	2025-09-19T22:37:53.009842803Z  + enable_network_magic
	2025-09-19T22:37:53.010006590Z  + local docker_embedded_dns_ip=127.0.0.11
	2025-09-19T22:37:53.010017830Z  + local docker_host_ip
	2025-09-19T22:37:53.011365196Z  ++ cut '-d ' -f1
	2025-09-19T22:37:53.011382825Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:37:53.011608718Z  +++ timeout 5 getent ahostsv4 host.docker.internal
	2025-09-19T22:37:53.046866100Z  + docker_host_ip=
	2025-09-19T22:37:53.046889662Z  + [[ -z '' ]]
	2025-09-19T22:37:53.047523448Z  ++ ip -4 route show default
	2025-09-19T22:37:53.047651936Z  ++ cut '-d ' -f3
	2025-09-19T22:37:53.049842154Z  + docker_host_ip=192.168.49.1
	2025-09-19T22:37:53.050135565Z  + iptables-save
	2025-09-19T22:37:53.050573573Z  + iptables-restore
	2025-09-19T22:37:53.053522081Z  + sed -e 's/-d 127.0.0.11/-d 192.168.49.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.49.1:53/g' -e 's/p -j DNAT --to-destination 127.0.0.11/p --dport 53 -j DNAT --to-destination 127.0.0.11/g'
	2025-09-19T22:37:53.059704181Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2025-09-19T22:37:53.061676971Z  ++ sed -e s/127.0.0.11/192.168.49.1/g /etc/resolv.conf.original
	2025-09-19T22:37:53.062948346Z  + replaced='# Generated by Docker Engine.
	2025-09-19T22:37:53.062966177Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-19T22:37:53.062969836Z  # has been modified.
	2025-09-19T22:37:53.062971730Z  
	2025-09-19T22:37:53.062973453Z  nameserver 192.168.49.1
	2025-09-19T22:37:53.062975323Z  search local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-19T22:37:53.062977144Z  options edns0 trust-ad ndots:0
	2025-09-19T22:37:53.062988870Z  
	2025-09-19T22:37:53.062990624Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-19T22:37:53.062992398Z  # ExtServers: [host(127.0.0.53)]
	2025-09-19T22:37:53.062994065Z  # Overrides: []
	2025-09-19T22:37:53.062995723Z  # Option ndots from: internal'
	2025-09-19T22:37:53.062997286Z  + [[ '' == '' ]]
	2025-09-19T22:37:53.062998849Z  + echo '# Generated by Docker Engine.
	2025-09-19T22:37:53.063000481Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-19T22:37:53.063002227Z  # has been modified.
	2025-09-19T22:37:53.063003768Z  
	2025-09-19T22:37:53.063005245Z  nameserver 192.168.49.1
	2025-09-19T22:37:53.063006877Z  search local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-19T22:37:53.063008606Z  options edns0 trust-ad ndots:0
	2025-09-19T22:37:53.063010178Z  
	2025-09-19T22:37:53.063011674Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-19T22:37:53.063013458Z  # ExtServers: [host(127.0.0.53)]
	2025-09-19T22:37:53.063015048Z  # Overrides: []
	2025-09-19T22:37:53.063016610Z  # Option ndots from: internal'
	2025-09-19T22:37:53.063094882Z  + files_to_update=('/etc/kubernetes/manifests/etcd.yaml' '/etc/kubernetes/manifests/kube-apiserver.yaml' '/etc/kubernetes/manifests/kube-controller-manager.yaml' '/etc/kubernetes/manifests/kube-scheduler.yaml' '/etc/kubernetes/controller-manager.conf' '/etc/kubernetes/scheduler.conf' '/kind/kubeadm.conf' '/var/lib/kubelet/kubeadm-flags.env')
	2025-09-19T22:37:53.063109101Z  + local files_to_update
	2025-09-19T22:37:53.063112149Z  + local should_fix_certificate=false
	2025-09-19T22:37:53.064250759Z  ++ cut '-d ' -f1
	2025-09-19T22:37:53.064357020Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:37:53.064835107Z  ++++ hostname
	2025-09-19T22:37:53.065781461Z  +++ timeout 5 getent ahostsv4 ha-326307-m04
	2025-09-19T22:37:53.068623705Z  + curr_ipv4=192.168.49.5
	2025-09-19T22:37:53.068644459Z  + echo 'INFO: Detected IPv4 address: 192.168.49.5'
	2025-09-19T22:37:53.068647786Z  INFO: Detected IPv4 address: 192.168.49.5
	2025-09-19T22:37:53.068650671Z  + '[' -f /kind/old-ipv4 ']'
	2025-09-19T22:37:53.068653378Z  + [[ -n 192.168.49.5 ]]
	2025-09-19T22:37:53.068656078Z  + echo -n 192.168.49.5
	2025-09-19T22:37:53.069956901Z  ++ cut '-d ' -f1
	2025-09-19T22:37:53.069976675Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:37:53.070576473Z  ++++ hostname
	2025-09-19T22:37:53.071344513Z  +++ timeout 5 getent ahostsv6 ha-326307-m04
	2025-09-19T22:37:53.074063822Z  + curr_ipv6=
	2025-09-19T22:37:53.074084663Z  + echo 'INFO: Detected IPv6 address: '
	2025-09-19T22:37:53.074099101Z  INFO: Detected IPv6 address: 
	2025-09-19T22:37:53.074101265Z  + '[' -f /kind/old-ipv6 ']'
	2025-09-19T22:37:53.074102991Z  + [[ -n '' ]]
	2025-09-19T22:37:53.074104652Z  + false
	2025-09-19T22:37:53.074632593Z  ++ uname -a
	2025-09-19T22:37:53.075592267Z  + echo 'entrypoint completed: Linux ha-326307-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux'
	2025-09-19T22:37:53.075610715Z  entrypoint completed: Linux ha-326307-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	2025-09-19T22:37:53.075614429Z  + exec /sbin/init
	2025-09-19T22:37:53.082920439Z  systemd 249.11-0ubuntu3.16 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
	2025-09-19T22:37:53.082947442Z  Detected virtualization docker.
	2025-09-19T22:37:53.082951592Z  Detected architecture x86-64.
	2025-09-19T22:37:53.082954607Z  
	2025-09-19T22:37:53.082957369Z  Welcome to Ubuntu 22.04.5 LTS!
	2025-09-19T22:37:53.082960675Z  
	2025-09-19T22:37:53.083329395Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:37:53.083350065Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:37:53.083353988Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:37:53.083357198Z  Exiting PID 1...
	
	-- /stdout --
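The second attempt fails in exactly the same way, almost immediately after exec /sbin/init, which points at the CI host rather than anything specific to this node. If the inotify-instance explanation is right, the exhaustion should be visible host-side; a sketch, assuming /proc is readable, that counts open inotify instances and groups them by owning PID:

    # Total inotify instances currently open on the host (each appears as an anon inode fd).
    find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l

    # Rough breakdown of the top consumers; field 3 of each /proc path is the PID.
    find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null \
      | cut -d/ -f3 | sort | uniq -c | sort -rn | head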
	I0919 22:37:53.481284   85805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:37:53.541322   85805 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 22:37:53.530238496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:37:53.541401   85805 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 22:37:53.530238496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux A
rchitecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:fals
e Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:37:53.541471   85805 network_create.go:284] running [docker network inspect ha-326307-m04] to gather additional debugging logs...
	I0919 22:37:53.541492   85805 cli_runner.go:164] Run: docker network inspect ha-326307-m04
	W0919 22:37:53.560471   85805 cli_runner.go:211] docker network inspect ha-326307-m04 returned with exit code 1
	I0919 22:37:53.560497   85805 network_create.go:287] error running [docker network inspect ha-326307-m04]: docker network inspect ha-326307-m04: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-326307-m04 not found
	I0919 22:37:53.560511   85805 network_create.go:289] output of [docker network inspect ha-326307-m04]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-326307-m04 not found
	
	** /stderr **
	I0919 22:37:53.560564   85805 client.go:171] duration metric: took 5.838073234s to LocalClient.Create
	I0919 22:37:55.561419   85805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:37:55.561478   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:55.582093   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:37:55.582245   85805 retry.go:31] will retry after 199.732158ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:55.782783   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:55.801356   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:37:55.801459   85805 retry.go:31] will retry after 195.053902ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:55.996838   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:56.016585   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:37:56.016698   85805 retry.go:31] will retry after 499.493017ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:56.516370   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:56.535112   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:37:56.535245   85805 retry.go:31] will retry after 645.500013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:57.181127   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:57.199817   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:37:57.199913   85805 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:37:57.199926   85805 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:57.199971   85805 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:37:57.199999   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:57.218953   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:37:57.219086   85805 retry.go:31] will retry after 141.364456ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:57.361436   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:57.382839   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:37:57.382933   85805 retry.go:31] will retry after 397.521585ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:57.781625   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:57.801200   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:37:57.801304   85805 retry.go:31] will retry after 498.135788ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:58.300033   85805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:37:58.319184   85805 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:37:58.319324   85805 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:37:58.319339   85805 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:37:58.319349   85805 start.go:128] duration metric: took 10.598810504s to createHost
	I0919 22:37:58.319356   85805 start.go:83] releasing machines lock for "ha-326307-m04", held for 10.598974365s
	W0919 22:37:58.319435   85805 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-326307" may fix it: creating host: create: creating: prepare kic ssh: container name "ha-326307-m04" state Stopped: log: 2025-09-19T22:37:53.083329395Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:37:53.083350065Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:37:53.083353988Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:37:53.083357198Z  Exiting PID 1...: container exited unexpectedly
	* Failed to start docker container. Running "minikube delete -p ha-326307" may fix it: creating host: create: creating: prepare kic ssh: container name "ha-326307-m04" state Stopped: log: 2025-09-19T22:37:53.083329395Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:37:53.083350065Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:37:53.083353988Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:37:53.083357198Z  Exiting PID 1...: container exited unexpectedly
	I0919 22:37:58.321876   85805 out.go:203] 
	W0919 22:37:58.323177   85805 out.go:285] X Exiting due to GUEST_PROVISION_EXIT_UNEXPECTED: Failed to start host: creating host: create: creating: prepare kic ssh: container name "ha-326307-m04" state Stopped: log: 2025-09-19T22:37:53.083329395Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:37:53.083350065Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:37:53.083353988Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:37:53.083357198Z  Exiting PID 1...: container exited unexpectedly
	X Exiting due to GUEST_PROVISION_EXIT_UNEXPECTED: Failed to start host: creating host: create: creating: prepare kic ssh: container name "ha-326307-m04" state Stopped: log: 2025-09-19T22:37:53.083329395Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:37:53.083350065Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:37:53.083353988Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:37:53.083357198Z  Exiting PID 1...: container exited unexpectedly
	I0919 22:37:58.325937   85805 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-326307 node add --alsologtostderr -v 5" : exit status 80
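The proximate cause of the exit status 80 is visible in the stderr block above: systemd (PID 1) inside the new ha-326307-m04 kic container aborts with "Failed to create control group inotify object: Too many open files", so the container stops before an SSH port is ever published. On busy CI hosts this class of failure is commonly tied to exhausted per-user inotify limits from the containers already running; the commands below are a hedged, illustrative host-side check and mitigation only (the limit values are assumptions, not values read from ubuntu-20-agent-8), not something this run verified.

# Check the host's current inotify limits (systemd in each kic container consumes inotify instances).
sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches

# Raise them for the current boot if they are low; persist via a drop-in under /etc/sysctl.d/ if this helps.
sudo sysctl -w fs.inotify.max_user_instances=8192
sudo sysctl -w fs.inotify.max_user_watches=1048576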
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-326307
helpers_test.go:243: (dbg) docker inspect ha-326307:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	        "Created": "2025-09-19T22:23:18.619000062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 69921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:23:18.670514121Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hosts",
	        "LogPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3-json.log",
	        "Name": "/ha-326307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-326307:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-326307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	                "LowerDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-326307",
	                "Source": "/var/lib/docker/volumes/ha-326307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-326307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-326307",
	                "name.minikube.sigs.k8s.io": "ha-326307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b9c61cd0152986e2b265b3cf0a7628b1c049e495ce30493b8e54f6b9446115f",
	            "SandboxKey": "/var/run/docker/netns/8b9c61cd0152",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-326307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:80:09:d2:65:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "465af21e2d8d112a34f14f7c3ab89eac4e6e57f582ff8e32b514381f55dd085e",
	                    "EndpointID": "f35735061c65841c2c1ba7f2859db25885582588fa8f2d14e3a015320f6c3fc4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-326307",
	                        "5e0f1fe86b08"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
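The "Ports" section of this inspect output also explains the repeated 22/tcp lookup failures earlier in the run: minikube reads the SSH host port through a Go template over .NetworkSettings.Ports, which Docker only populates for a running container. As a hedged illustration (reusing the exact template from the log above, not new tooling), the same query succeeds against the running primary node but fails against the stopped ha-326307-m04, which is the "exit status 1" and "unable to inspect a not running container" seen earlier.

# Running primary node: the 22/tcp mapping resolves (this inspect output shows host port 32788).
docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-326307

# Stopped ha-326307-m04: Ports is empty, so the template index fails and the command exits non-zero.
docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-326307-m04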
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-326307 -n ha-326307
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 logs -n 25: (1.217749209s)
helpers_test.go:260: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- nslookup kubernetes.io                                              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- nslookup kubernetes.io                                              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- nslookup kubernetes.io                                              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- nslookup kubernetes.default                                         │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- nslookup kubernetes.default                                         │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- nslookup kubernetes.default                                         │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- nslookup kubernetes.default.svc.cluster.local                       │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- nslookup kubernetes.default.svc.cluster.local                       │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- nslookup kubernetes.default.svc.cluster.local                       │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-jdczt -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-m8swj -- sh -c ping -c 1 192.168.49.1                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ kubectl │ ha-326307 kubectl -- exec busybox-7b57f96db7-tfpvf -- sh -c ping -c 1 192.168.49.1                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:37 UTC │
	│ node    │ ha-326307 node add --alsologtostderr -v 5                                                                                 │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:23:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:23:13.527478   69358 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:23:13.527574   69358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:13.527579   69358 out.go:374] Setting ErrFile to fd 2...
	I0919 22:23:13.527586   69358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:13.527823   69358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:23:13.528355   69358 out.go:368] Setting JSON to false
	I0919 22:23:13.529260   69358 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3938,"bootTime":1758316656,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:23:13.529345   69358 start.go:140] virtualization: kvm guest
	I0919 22:23:13.531661   69358 out.go:179] * [ha-326307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:23:13.533198   69358 notify.go:220] Checking for updates...
	I0919 22:23:13.533231   69358 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:23:13.534827   69358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:23:13.536340   69358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:23:13.537773   69358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:23:13.539372   69358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:23:13.541189   69358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:23:13.542697   69358 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:23:13.568228   69358 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:23:13.568380   69358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:13.622546   69358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:23:13.612893654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:13.622646   69358 docker.go:318] overlay module found
	I0919 22:23:13.624668   69358 out.go:179] * Using the docker driver based on user configuration
	I0919 22:23:13.626116   69358 start.go:304] selected driver: docker
	I0919 22:23:13.626134   69358 start.go:918] validating driver "docker" against <nil>
	I0919 22:23:13.626147   69358 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:23:13.626725   69358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:13.684385   69358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:23:13.672811393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:13.684569   69358 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:23:13.684775   69358 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:23:13.686618   69358 out.go:179] * Using Docker driver with root privileges
	I0919 22:23:13.687924   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:13.688000   69358 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:23:13.688014   69358 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:23:13.688089   69358 start.go:348] cluster config:
	{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:13.689601   69358 out.go:179] * Starting "ha-326307" primary control-plane node in "ha-326307" cluster
	I0919 22:23:13.691305   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:23:13.692823   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:23:13.694304   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:13.694378   69358 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 22:23:13.694398   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:23:13.694426   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:23:13.694515   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:23:13.694533   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:23:13.694981   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:13.695014   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json: {Name:mk9e3af266bcfbabd18624d7d22535c6f1841e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:13.716737   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:23:13.716759   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:23:13.716776   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:23:13.716797   69358 start.go:360] acquireMachinesLock for ha-326307: {Name:mk42b79b90944aab63c8b37c2f94e04ca1ebec1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:23:13.716893   69358 start.go:364] duration metric: took 80.537µs to acquireMachinesLock for "ha-326307"
	I0919 22:23:13.716915   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:13.716974   69358 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:23:13.719062   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:23:13.719317   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:23:13.719352   69358 client.go:168] LocalClient.Create starting
	I0919 22:23:13.719447   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:23:13.719502   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:13.719517   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:13.719580   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:23:13.719600   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:13.719610   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:13.719933   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:23:13.737609   69358 cli_runner.go:211] docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:23:13.737699   69358 network_create.go:284] running [docker network inspect ha-326307] to gather additional debugging logs...
	I0919 22:23:13.737725   69358 cli_runner.go:164] Run: docker network inspect ha-326307
	W0919 22:23:13.755400   69358 cli_runner.go:211] docker network inspect ha-326307 returned with exit code 1
	I0919 22:23:13.755437   69358 network_create.go:287] error running [docker network inspect ha-326307]: docker network inspect ha-326307: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-326307 not found
	I0919 22:23:13.755455   69358 network_create.go:289] output of [docker network inspect ha-326307]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-326307 not found
	
	** /stderr **
	I0919 22:23:13.755563   69358 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:13.774541   69358 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018eb270}
	I0919 22:23:13.774578   69358 network_create.go:124] attempt to create docker network ha-326307 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:23:13.774619   69358 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-326307 ha-326307
	I0919 22:23:13.834699   69358 network_create.go:108] docker network ha-326307 192.168.49.0/24 created
	I0919 22:23:13.834730   69358 kic.go:121] calculated static IP "192.168.49.2" for the "ha-326307" container
	I0919 22:23:13.834799   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:23:13.852316   69358 cli_runner.go:164] Run: docker volume create ha-326307 --label name.minikube.sigs.k8s.io=ha-326307 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:23:13.872969   69358 oci.go:103] Successfully created a docker volume ha-326307
	I0919 22:23:13.873115   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307 --entrypoint /usr/bin/test -v ha-326307:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:23:14.277718   69358 oci.go:107] Successfully prepared a docker volume ha-326307
	I0919 22:23:14.277762   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:14.277789   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:23:14.277852   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:23:18.547851   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.269954037s)
	I0919 22:23:18.547886   69358 kic.go:203] duration metric: took 4.270092787s to extract preloaded images to volume ...
	W0919 22:23:18.548002   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:23:18.548044   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:23:18.548091   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:23:18.602395   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307 --name ha-326307 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307 --network ha-326307 --ip 192.168.49.2 --volume ha-326307:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:23:18.902433   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Running}}
	I0919 22:23:18.923488   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:18.945324   69358 cli_runner.go:164] Run: docker exec ha-326307 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:23:18.998198   69358 oci.go:144] the created container "ha-326307" has a running status.
	I0919 22:23:18.998254   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa...
	I0919 22:23:19.305578   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:23:19.305639   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:23:19.338987   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:19.361057   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:23:19.361077   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:23:19.423644   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:19.446710   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:23:19.446815   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.468914   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.469178   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.469194   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:23:19.609654   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:23:19.609685   69358 ubuntu.go:182] provisioning hostname "ha-326307"
	I0919 22:23:19.609806   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.631352   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.631769   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.631790   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307 && echo "ha-326307" | sudo tee /etc/hostname
	I0919 22:23:19.783770   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:23:19.783868   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.802757   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.802967   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.802990   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:23:19.942778   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:23:19.942811   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:23:19.942925   69358 ubuntu.go:190] setting up certificates
	I0919 22:23:19.942949   69358 provision.go:84] configureAuth start
	I0919 22:23:19.943010   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:19.963444   69358 provision.go:143] copyHostCerts
	I0919 22:23:19.963491   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:19.963531   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:23:19.963541   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:19.963629   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:23:19.963778   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:19.963807   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:23:19.963811   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:19.963862   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:23:19.963997   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:19.964030   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:23:19.964040   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:19.964080   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:23:19.964187   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307 san=[127.0.0.1 192.168.49.2 ha-326307 localhost minikube]
	I0919 22:23:20.747311   69358 provision.go:177] copyRemoteCerts
	I0919 22:23:20.747377   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:23:20.747410   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:20.766468   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:20.866991   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:23:20.867057   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:23:20.897799   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:23:20.897858   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:23:20.925953   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:23:20.926026   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:23:20.954845   69358 provision.go:87] duration metric: took 1.011880735s to configureAuth
	I0919 22:23:20.954872   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:23:20.955074   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:20.955089   69358 machine.go:96] duration metric: took 1.508356629s to provisionDockerMachine
	I0919 22:23:20.955096   69358 client.go:171] duration metric: took 7.235738314s to LocalClient.Create
	I0919 22:23:20.955122   69358 start.go:167] duration metric: took 7.235806728s to libmachine.API.Create "ha-326307"
	I0919 22:23:20.955128   69358 start.go:293] postStartSetup for "ha-326307" (driver="docker")
	I0919 22:23:20.955136   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:23:20.955224   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:23:20.955259   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:20.975767   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.077921   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:23:21.081820   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:23:21.081872   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:23:21.081881   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:23:21.081888   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:23:21.081901   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:23:21.081973   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:23:21.082057   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:23:21.082071   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:23:21.082204   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:23:21.092245   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:21.123732   69358 start.go:296] duration metric: took 168.590139ms for postStartSetup
	I0919 22:23:21.124127   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:21.143109   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:21.143414   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:23:21.143466   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.162970   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.258062   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:23:21.263437   69358 start.go:128] duration metric: took 7.546444684s to createHost
	I0919 22:23:21.263491   69358 start.go:83] releasing machines lock for "ha-326307", held for 7.546570423s
	I0919 22:23:21.263561   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:21.282251   69358 ssh_runner.go:195] Run: cat /version.json
	I0919 22:23:21.282309   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.282391   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:23:21.282539   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.302076   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.302858   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.477003   69358 ssh_runner.go:195] Run: systemctl --version
	I0919 22:23:21.481946   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:23:21.486736   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:23:21.519470   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:23:21.519573   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:23:21.549703   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:23:21.549736   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:23:21.549772   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:23:21.549813   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:23:21.563897   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:23:21.577043   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:23:21.577104   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:23:21.591898   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:23:21.607905   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:23:21.677531   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:23:21.749223   69358 docker.go:234] disabling docker service ...
	I0919 22:23:21.749348   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:23:21.771648   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:23:21.786268   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:23:21.864247   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:23:21.930620   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:23:21.943680   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:23:21.963319   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:23:21.977473   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:23:21.989630   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:23:21.989705   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:23:22.001778   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:22.013415   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:23:22.024683   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:22.036042   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:23:22.047238   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:23:22.060239   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:23:22.074324   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:23:22.087081   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:23:22.099883   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:23:22.110348   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:22.180253   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:23:22.295748   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:23:22.295832   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:23:22.300535   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:23:22.300597   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:23:22.304676   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:23:22.344790   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:23:22.344850   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:22.371338   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:22.400934   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:23:22.402669   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:22.421952   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:23:22.426523   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:22.442415   69358 kubeadm.go:875] updating cluster {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:23:22.442712   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:22.442823   69358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:23:22.482684   69358 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:23:22.482710   69358 containerd.go:534] Images already preloaded, skipping extraction
	I0919 22:23:22.482762   69358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:23:22.518500   69358 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:23:22.518526   69358 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:23:22.518533   69358 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0919 22:23:22.518616   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:23:22.518668   69358 ssh_runner.go:195] Run: sudo crictl info
	I0919 22:23:22.554956   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:22.554993   69358 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:23:22.555004   69358 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:23:22.555029   69358 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326307 NodeName:ha-326307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:23:22.555176   69358 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-326307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:23:22.555209   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:23:22.555273   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:23:22.568901   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:23:22.569038   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:23:22.569091   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:23:22.580223   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:23:22.580317   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:23:22.591268   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 22:23:22.612688   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:23:22.636770   69358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0919 22:23:22.658657   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:23:22.681384   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:23:22.685531   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:22.698340   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:22.769217   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:23:22.792280   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.2
	I0919 22:23:22.792300   69358 certs.go:194] generating shared ca certs ...
	I0919 22:23:22.792315   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.792509   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:23:22.792553   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:23:22.792563   69358 certs.go:256] generating profile certs ...
	I0919 22:23:22.792630   69358 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:23:22.792643   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt with IP's: []
	I0919 22:23:22.975725   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt ...
	I0919 22:23:22.975759   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt: {Name:mk32bca88dd6748516774b56251f96e4fc38a69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.975973   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key ...
	I0919 22:23:22.975990   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key: {Name:mkc0e836c004e527dbd2787dc00463a0715cf8a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.976108   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226
	I0919 22:23:22.976125   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:23:23.460427   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 ...
	I0919 22:23:23.460460   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226: {Name:mk98859e0e43a6d4b4da591dc89695908954cc81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.460672   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226 ...
	I0919 22:23:23.460693   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226: {Name:mk3473c1668aec72ec5a5598645b70e29415cdd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.460941   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:23:23.461078   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
	I0919 22:23:23.461207   69358 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:23:23.461233   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt with IP's: []
	I0919 22:23:23.489621   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt ...
	I0919 22:23:23.489652   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt: {Name:mk06f3b4cfde33781bd7076ead00f94525257452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.489837   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key ...
	I0919 22:23:23.489860   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key: {Name:mk632a617a99ac85bf5a9b022d1173caf8e7b208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.489978   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:23:23.490003   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:23:23.490018   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:23:23.490034   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:23:23.490051   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:23:23.490069   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:23:23.490087   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:23:23.490100   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:23:23.490185   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:23:23.490228   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:23:23.490238   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:23:23.490273   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:23:23.490304   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:23:23.490333   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:23:23.490390   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:23.490435   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.490455   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.490497   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.491033   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:23:23.517815   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:23:23.544857   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:23:23.571386   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:23:23.600966   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:23:23.629855   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:23:23.657907   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:23:23.685564   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:23:23.713503   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:23:23.745344   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:23:23.774311   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:23:23.807603   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:23:23.832523   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:23:23.839649   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:23:23.851364   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.856325   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.856396   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.864469   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:23:23.876649   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:23:23.888129   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.892889   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.892949   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.901167   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:23:23.912487   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:23:23.924831   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.929289   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.929357   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.937110   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:23:23.948517   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:23:23.952948   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:23:23.953011   69358 kubeadm.go:392] StartCluster: {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:23.953080   69358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 22:23:23.953122   69358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:23:23.991138   69358 cri.go:89] found id: ""
	I0919 22:23:23.991247   69358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:23:24.003111   69358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:23:24.013643   69358 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:23:24.013714   69358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:23:24.024557   69358 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:23:24.024576   69358 kubeadm.go:157] found existing configuration files:
	
	I0919 22:23:24.024633   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:23:24.035252   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:23:24.035322   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:23:24.045590   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:23:24.056529   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:23:24.056590   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:23:24.066716   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:23:24.077570   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:23:24.077653   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:23:24.088177   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:23:24.098372   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:23:24.098426   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:23:24.108265   69358 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:23:24.149643   69358 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:23:24.149730   69358 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:23:24.166048   69358 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:23:24.166117   69358 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:23:24.166172   69358 kubeadm.go:310] OS: Linux
	I0919 22:23:24.166213   69358 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:23:24.166275   69358 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:23:24.166357   69358 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:23:24.166446   69358 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:23:24.166536   69358 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:23:24.166608   69358 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:23:24.166683   69358 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:23:24.166760   69358 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:23:24.230351   69358 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:23:24.230487   69358 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:23:24.230602   69358 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:23:24.238806   69358 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:23:24.243498   69358 out.go:252]   - Generating certificates and keys ...
	I0919 22:23:24.243610   69358 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:23:24.243715   69358 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:23:24.335199   69358 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:23:24.361175   69358 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:23:24.769077   69358 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:23:25.053293   69358 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:23:25.392067   69358 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:23:25.392251   69358 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-326307 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:23:25.629558   69358 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:23:25.629706   69358 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-326307 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:23:26.141828   69358 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:23:26.343650   69358 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:23:26.737207   69358 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:23:26.737292   69358 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:23:27.020543   69358 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:23:27.208963   69358 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:23:27.382044   69358 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:23:27.660395   69358 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:23:27.867964   69358 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:23:27.868475   69358 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:23:27.870857   69358 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:23:27.873408   69358 out.go:252]   - Booting up control plane ...
	I0919 22:23:27.873545   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:23:27.873665   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:23:27.873811   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:23:27.884709   69358 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:23:27.884874   69358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:23:27.892815   69358 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:23:27.893043   69358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:23:27.893108   69358 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:23:27.981591   69358 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:23:27.981772   69358 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:23:29.484085   69358 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501867716s
	I0919 22:23:29.488057   69358 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:23:29.488269   69358 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:23:29.488401   69358 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:23:29.488636   69358 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:23:31.058022   69358 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.569932465s
	I0919 22:23:31.762139   69358 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.27419796s
	I0919 22:23:33.991284   69358 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.503282233s
	I0919 22:23:34.005767   69358 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:23:34.017935   69358 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:23:34.032336   69358 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:23:34.032534   69358 kubeadm.go:310] [mark-control-plane] Marking the node ha-326307 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:23:34.042496   69358 kubeadm.go:310] [bootstrap-token] Using token: ym5hq4.pw1tvtip1io4ljbf
	I0919 22:23:34.044381   69358 out.go:252]   - Configuring RBAC rules ...
	I0919 22:23:34.044558   69358 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:23:34.048649   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:23:34.057509   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:23:34.061297   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:23:34.064926   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:23:34.069534   69358 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:23:34.399239   69358 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:23:34.818126   69358 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:23:35.398001   69358 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:23:35.398907   69358 kubeadm.go:310] 
	I0919 22:23:35.399007   69358 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:23:35.399035   69358 kubeadm.go:310] 
	I0919 22:23:35.399120   69358 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:23:35.399149   69358 kubeadm.go:310] 
	I0919 22:23:35.399207   69358 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:23:35.399301   69358 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:23:35.399350   69358 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:23:35.399356   69358 kubeadm.go:310] 
	I0919 22:23:35.399402   69358 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:23:35.399408   69358 kubeadm.go:310] 
	I0919 22:23:35.399470   69358 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:23:35.399481   69358 kubeadm.go:310] 
	I0919 22:23:35.399554   69358 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:23:35.399644   69358 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:23:35.399706   69358 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:23:35.399712   69358 kubeadm.go:310] 
	I0919 22:23:35.399803   69358 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:23:35.399888   69358 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:23:35.399892   69358 kubeadm.go:310] 
	I0919 22:23:35.399971   69358 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ym5hq4.pw1tvtip1io4ljbf \
	I0919 22:23:35.400068   69358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 \
	I0919 22:23:35.400089   69358 kubeadm.go:310] 	--control-plane 
	I0919 22:23:35.400093   69358 kubeadm.go:310] 
	I0919 22:23:35.400204   69358 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:23:35.400217   69358 kubeadm.go:310] 
	I0919 22:23:35.400285   69358 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ym5hq4.pw1tvtip1io4ljbf \
	I0919 22:23:35.400382   69358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 
	I0919 22:23:35.403119   69358 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:23:35.403274   69358 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 22:23:35.403305   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:35.403317   69358 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:23:35.407302   69358 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:23:35.409983   69358 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:23:35.415011   69358 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:23:35.415039   69358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:23:35.436210   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
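
	The kubectl apply above installs the CNI manifest recommended at 22:23:35.403 (kindnet). A quick way to confirm the rollout, assuming the manifest creates a DaemonSet named kindnet in kube-system (the usual name in minikube's kindnet manifest), would be:

	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system rollout status daemonset kindnet
	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -o wide | grep kindnet
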
	I0919 22:23:35.679694   69358 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:23:35.679756   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:35.679779   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307 minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=true
	I0919 22:23:35.787076   69358 ops.go:34] apiserver oom_adj: -16
	I0919 22:23:35.787237   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:36.287327   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:36.787300   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:37.287415   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:37.788066   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:38.287401   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:38.787731   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.288028   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.788301   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.864456   69358 kubeadm.go:1105] duration metric: took 4.184765822s to wait for elevateKubeSystemPrivileges
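
	The repeated `get sa default` calls above are a readiness poll: minikube retries about every 500 ms until the default ServiceAccount exists before it finishes elevating kube-system privileges. An equivalent standalone wait loop (a sketch, not minikube's actual code) would be:

	    until sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	      sleep 0.5
	    done
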
	I0919 22:23:39.864500   69358 kubeadm.go:394] duration metric: took 15.911493151s to StartCluster
	I0919 22:23:39.864524   69358 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:39.864601   69358 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:23:39.865911   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:39.866255   69358 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:39.866275   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:23:39.866288   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:23:39.866297   69358 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:23:39.866377   69358 addons.go:69] Setting storage-provisioner=true in profile "ha-326307"
	I0919 22:23:39.866398   69358 addons.go:238] Setting addon storage-provisioner=true in "ha-326307"
	I0919 22:23:39.866400   69358 addons.go:69] Setting default-storageclass=true in profile "ha-326307"
	I0919 22:23:39.866428   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:39.866523   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:39.866434   69358 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-326307"
	I0919 22:23:39.866921   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.867012   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.892851   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:23:39.893863   69358 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:23:39.893944   69358 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:23:39.893953   69358 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:23:39.894002   69358 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:23:39.894061   69358 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:23:39.893888   69358 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:23:39.894642   69358 addons.go:238] Setting addon default-storageclass=true in "ha-326307"
	I0919 22:23:39.894691   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:39.895196   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.895724   69358 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:23:39.897293   69358 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:23:39.897315   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:23:39.897386   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:39.923915   69358 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:23:39.923939   69358 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:23:39.924001   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:39.926323   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:39.953300   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:39.968501   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:23:40.065441   69358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:23:40.083647   69358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:23:40.190461   69358 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
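
	The sed pipeline at 22:23:39.968 rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1). Judging from the sed expression, the Corefile now carries a hosts block with a fallthrough directly above the forward directive; an illustrative check:

	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	    # expected: "192.168.49.1 host.minikube.internal", then "fallthrough", then the closing brace
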
	I0919 22:23:40.433561   69358 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:23:40.435567   69358 addons.go:514] duration metric: took 569.25898ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:23:40.435633   69358 start.go:246] waiting for cluster config update ...
	I0919 22:23:40.435651   69358 start.go:255] writing updated cluster config ...
	I0919 22:23:40.437510   69358 out.go:203] 
	I0919 22:23:40.439070   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:40.439141   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:40.441238   69358 out.go:179] * Starting "ha-326307-m02" control-plane node in "ha-326307" cluster
	I0919 22:23:40.443382   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:23:40.445749   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:23:40.447079   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:40.447132   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:23:40.447229   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:23:40.447308   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:23:40.447326   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:23:40.447427   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:40.470325   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:23:40.470347   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:23:40.470366   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:23:40.470391   69358 start.go:360] acquireMachinesLock for ha-326307-m02: {Name:mk4919a9b19250804b0f53d01bcd11efaf9a431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:23:40.470518   69358 start.go:364] duration metric: took 88.309µs to acquireMachinesLock for "ha-326307-m02"
	I0919 22:23:40.470552   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:40.470618   69358 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:23:40.473495   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:23:40.473607   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:23:40.473631   69358 client.go:168] LocalClient.Create starting
	I0919 22:23:40.473689   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:23:40.473724   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:40.473734   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:40.473828   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:23:40.473853   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:40.473861   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:40.474095   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:40.493916   69358 network_create.go:77] Found existing network {name:ha-326307 subnet:0xc000ad7620 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:23:40.493972   69358 kic.go:121] calculated static IP "192.168.49.3" for the "ha-326307-m02" container
	I0919 22:23:40.494055   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:23:40.516112   69358 cli_runner.go:164] Run: docker volume create ha-326307-m02 --label name.minikube.sigs.k8s.io=ha-326307-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:23:40.537046   69358 oci.go:103] Successfully created a docker volume ha-326307-m02
	I0919 22:23:40.537137   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m02 --entrypoint /usr/bin/test -v ha-326307-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:23:40.991997   69358 oci.go:107] Successfully prepared a docker volume ha-326307-m02
	I0919 22:23:40.992038   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:40.992061   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:23:40.992121   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:23:45.362629   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.370467998s)
	I0919 22:23:45.362666   69358 kic.go:203] duration metric: took 4.370603938s to extract preloaded images to volume ...
	W0919 22:23:45.362777   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:23:45.362811   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:23:45.362846   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:23:45.417833   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307-m02 --name ha-326307-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307-m02 --network ha-326307 --ip 192.168.49.3 --volume ha-326307-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
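
	The docker run above pins the second control-plane container to the static IP 192.168.49.3 calculated at 22:23:40.493, on the existing ha-326307 network. Verifying the assignment from the host is a one-liner (illustrative):

	    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-326307-m02   # expect 192.168.49.3
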
	I0919 22:23:45.744363   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Running}}
	I0919 22:23:45.768456   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:45.789293   69358 cli_runner.go:164] Run: docker exec ha-326307-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:23:45.846760   69358 oci.go:144] the created container "ha-326307-m02" has a running status.
	I0919 22:23:45.846794   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa...
	I0919 22:23:46.005236   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:23:46.005288   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:23:46.042640   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:46.067424   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:23:46.067455   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:23:46.132729   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:46.155854   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:23:46.155967   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.177181   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.177511   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.177533   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:23:46.320054   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:23:46.320089   69358 ubuntu.go:182] provisioning hostname "ha-326307-m02"
	I0919 22:23:46.320185   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.341740   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.341951   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.341965   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m02 && echo "ha-326307-m02" | sudo tee /etc/hostname
	I0919 22:23:46.497123   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:23:46.497234   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.520214   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.520436   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.520455   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:23:46.659417   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:23:46.659458   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:23:46.659492   69358 ubuntu.go:190] setting up certificates
	I0919 22:23:46.659505   69358 provision.go:84] configureAuth start
	I0919 22:23:46.659556   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:46.679498   69358 provision.go:143] copyHostCerts
	I0919 22:23:46.679551   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:46.679598   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:23:46.679605   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:46.679712   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:23:46.679851   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:46.679882   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:23:46.679893   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:46.679947   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:23:46.680043   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:46.680141   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:23:46.680185   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:46.680251   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:23:46.680367   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m02 san=[127.0.0.1 192.168.49.3 ha-326307-m02 localhost minikube]
	I0919 22:23:46.869190   69358 provision.go:177] copyRemoteCerts
	I0919 22:23:46.869251   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:23:46.869285   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.888798   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:46.988385   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:23:46.988452   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:23:47.018227   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:23:47.018299   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:23:47.046810   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:23:47.046866   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
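
	The server certificate copied above was generated at 22:23:46.680 with SANs for 127.0.0.1, 192.168.49.3, ha-326307-m02, localhost and minikube. To double-check the deployed certificate on the node, one could run (illustrative):

	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
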
	I0919 22:23:47.074372   69358 provision.go:87] duration metric: took 414.855982ms to configureAuth
	I0919 22:23:47.074400   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:23:47.074581   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:47.074598   69358 machine.go:96] duration metric: took 918.712366ms to provisionDockerMachine
	I0919 22:23:47.074607   69358 client.go:171] duration metric: took 6.600969352s to LocalClient.Create
	I0919 22:23:47.074631   69358 start.go:167] duration metric: took 6.601023702s to libmachine.API.Create "ha-326307"
	I0919 22:23:47.074642   69358 start.go:293] postStartSetup for "ha-326307-m02" (driver="docker")
	I0919 22:23:47.074650   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:23:47.074721   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:23:47.074767   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.094538   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.195213   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:23:47.199088   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:23:47.199139   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:23:47.199181   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:23:47.199191   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:23:47.199215   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:23:47.199276   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:23:47.199378   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:23:47.199394   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:23:47.199502   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:23:47.209642   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:47.240945   69358 start.go:296] duration metric: took 166.288086ms for postStartSetup
	I0919 22:23:47.241383   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:47.261061   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:47.261460   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:23:47.261513   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.280359   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.374609   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:23:47.379255   69358 start.go:128] duration metric: took 6.908623332s to createHost
	I0919 22:23:47.379283   69358 start.go:83] releasing machines lock for "ha-326307-m02", held for 6.908753842s
	I0919 22:23:47.379346   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:47.400418   69358 out.go:179] * Found network options:
	I0919 22:23:47.401854   69358 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:23:47.403072   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:23:47.403133   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:23:47.403263   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:23:47.403266   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:23:47.403326   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.403332   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.423928   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.424218   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.597529   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:23:47.630263   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:23:47.630334   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:23:47.661706   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:23:47.661733   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:23:47.661772   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:23:47.661826   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:23:47.675485   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:23:47.687726   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:23:47.687780   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:23:47.701818   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:23:47.717912   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:23:47.789825   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:23:47.863188   69358 docker.go:234] disabling docker service ...
	I0919 22:23:47.863267   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:23:47.881757   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:23:47.893830   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:23:47.963004   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:23:48.034120   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:23:48.046843   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:23:48.065279   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:23:48.078269   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:23:48.089105   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:23:48.089186   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:23:48.099867   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:48.111076   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:23:48.122049   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:48.132648   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:23:48.142263   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:23:48.152876   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:23:48.163459   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:23:48.174096   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:23:48.183483   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:23:48.192780   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:48.261004   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
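
	The sed edits between 22:23:48.065 and 22:23:48.163 switch containerd on m02 to the systemd cgroup driver and pin the pause (sandbox) image before the restart above. Once containerd is back, the effective settings can be spot-checked like this (illustrative):

	    grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
	    # expected: SystemdCgroup = true and sandbox_image = "registry.k8s.io/pause:3.10.1"
	    systemctl is-active containerd
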
	I0919 22:23:48.364434   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:23:48.364508   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:23:48.368726   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:23:48.368792   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:23:48.372683   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:23:48.409110   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:23:48.409200   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:48.433389   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:48.460529   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:23:48.462207   69358 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:23:48.464087   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:48.482217   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:23:48.486620   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:48.498806   69358 mustload.go:65] Loading cluster: ha-326307
	I0919 22:23:48.499032   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:48.499315   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:48.518576   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:48.518850   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.3
	I0919 22:23:48.518866   69358 certs.go:194] generating shared ca certs ...
	I0919 22:23:48.518885   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.519012   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:23:48.519082   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:23:48.519096   69358 certs.go:256] generating profile certs ...
	I0919 22:23:48.519222   69358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:23:48.519259   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4
	I0919 22:23:48.519288   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:23:48.963393   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 ...
	I0919 22:23:48.963428   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4: {Name:mk381f64cc0991e3a6417e9586b9565eb7a8dbf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.963635   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4 ...
	I0919 22:23:48.963660   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4: {Name:mk4dbead0b9c36c7a3635520729a1eb2d4b33f13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.963762   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:23:48.963935   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
	I0919 22:23:48.964103   69358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:23:48.964120   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:23:48.964138   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:23:48.964166   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:23:48.964183   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:23:48.964200   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:23:48.964218   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:23:48.964234   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:23:48.964251   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:23:48.964313   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:23:48.964355   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:23:48.964366   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:23:48.964406   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:23:48.964438   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:23:48.964471   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:23:48.964528   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:48.964570   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:23:48.964592   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:23:48.964612   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:48.964731   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:48.983907   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:49.073692   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:23:49.078819   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:23:49.094234   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:23:49.099593   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:23:49.113663   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:23:49.117744   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:23:49.133048   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:23:49.136861   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:23:49.150734   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:23:49.154901   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:23:49.169388   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:23:49.173566   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:23:49.188070   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:23:49.215594   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:23:49.243561   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:23:49.271624   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:23:49.301814   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:23:49.332556   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:23:49.360723   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:23:49.388872   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:23:49.417316   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:23:49.448722   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:23:49.476877   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:23:49.504914   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:23:49.524969   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:23:49.544942   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:23:49.564506   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:23:49.584887   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:23:49.605725   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:23:49.625552   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:23:49.645811   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:23:49.652062   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:23:49.664544   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.668823   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.668889   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.676892   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:23:49.688737   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:23:49.699741   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.703762   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.703823   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.711311   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:23:49.721987   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:23:49.732874   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.737289   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.737351   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.745312   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
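
	The openssl -hash / ln -fs pairs above install each CA into OpenSSL's hashed lookup layout: the subject hash printed by openssl becomes the symlink name with a .0 suffix (b5213941.0 for minikubeCA, 51391683.0 and 3ec20f2e.0 for the two local certs). Reproducing the mapping by hand would look like (illustrative):

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
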
	I0919 22:23:49.756384   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:23:49.760242   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:23:49.760315   69358 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0919 22:23:49.760415   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:23:49.760438   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:23:49.760476   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:23:49.773427   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
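
	Because `lsmod | grep ip_vs` returned nothing, minikube skips enabling kube-vip's IPVS-based control-plane load balancing; the VIP is still advertised via ARP (vip_arp is "true" in the generated config below). On a kernel that ships the module, loading it before this step would look like (illustrative; the module may simply be absent from this GCP kernel build):

	    sudo modprobe ip_vs && lsmod | grep ip_vs
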
	I0919 22:23:49.773499   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:23:49.773549   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:23:49.784237   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:23:49.784306   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:23:49.794534   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:23:49.814529   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:23:49.837846   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
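kube-vip runs as the static pod written above; per its env block it announces 192.168.49.254/32 on eth0 from whichever control-plane node currently holds the plndr-cp-lock lease. A rough, illustrative way to confirm that once the cluster is up (not part of the test run):

	# on the current lease holder the VIP appears as an extra address on eth0
	ip addr show eth0 | grep 192.168.49.254
	# the leader election itself is an ordinary Lease object in kube-system
	kubectl -n kube-system get lease plndr-cp-lock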
	I0919 22:23:49.859421   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:23:49.863859   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:49.876721   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:49.948089   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:23:49.971010   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:49.971327   69358 start.go:317] joinCluster: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:49.971508   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:23:49.971618   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:49.992535   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:50.137695   69358 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:50.137740   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kb90tj.om7zof6htice1y8z --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:24:08.633363   69358 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kb90tj.om7zof6htice1y8z --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.495537277s)
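The two Run lines above are the whole control-plane join: kubeadm token create --print-join-command on the existing node, then the printed kubeadm join with --control-plane on the new one. minikube copies the shared certificates over SSH itself (see the sa.pub/sa.key/front-proxy/etcd transfers further down in this log), which is why no --certificate-key flag appears. A minimal sketch of that flow, with the node IP taken from the log and everything else assumed:

	# on an existing control-plane node: mint a token and print the join command
	JOIN_CMD=$(sudo kubeadm token create --print-join-command --ttl=0)
	# on the node being added: join as an additional control plane, advertising its own IP
	NODE_IP=192.168.49.3
	sudo $JOIN_CMD --control-plane --apiserver-advertise-address="$NODE_IP"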
	I0919 22:24:08.633404   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:24:08.849981   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307-m02 minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=false
	I0919 22:24:08.928109   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326307-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:24:09.011507   69358 start.go:319] duration metric: took 19.040175049s to joinCluster
	I0919 22:24:09.011590   69358 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:09.011816   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:09.013756   69358 out.go:179] * Verifying Kubernetes components...
	I0919 22:24:09.015232   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:09.115618   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:09.130578   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:24:09.130645   69358 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:24:09.130869   69358 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m02" to be "Ready" ...
	W0919 22:24:11.134373   69358 node_ready.go:57] node "ha-326307-m02" has "Ready":"False" status (will retry)
	I0919 22:24:11.634655   69358 node_ready.go:49] node "ha-326307-m02" is "Ready"
	I0919 22:24:11.634683   69358 node_ready.go:38] duration metric: took 2.503796185s for node "ha-326307-m02" to be "Ready" ...
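node_ready.go is simply polling the node's Ready condition. The same check by hand is a jsonpath query; the ha-326307 kubectl context name is an assumption based on the profile name:

	kubectl --context ha-326307 get node ha-326307-m02 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # expect: True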
	I0919 22:24:11.634697   69358 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:24:11.634751   69358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:24:11.647782   69358 api_server.go:72] duration metric: took 2.636155477s to wait for apiserver process to appear ...
	I0919 22:24:11.647812   69358 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:24:11.647848   69358 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:24:11.652005   69358 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:24:11.652952   69358 api_server.go:141] control plane version: v1.34.0
	I0919 22:24:11.652975   69358 api_server.go:131] duration metric: took 5.15649ms to wait for apiserver health ...
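The healthz probe above is just an HTTPS GET against the first control-plane endpoint; a 200 response with body "ok" is what api_server.go treats as healthy. The hand-run equivalent is illustrative only; -k skips verification because this shell does not trust the cluster CA:

	curl -k https://192.168.49.2:8443/healthz   # expect: ok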
	I0919 22:24:11.652984   69358 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:24:11.657535   69358 system_pods.go:59] 17 kube-system pods found
	I0919 22:24:11.657569   69358 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:11.657577   69358 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:11.657581   69358 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:11.657586   69358 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Pending
	I0919 22:24:11.657591   69358 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:11.657598   69358 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Pending: PodScheduled:SchedulerError (pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed)
	I0919 22:24:11.657604   69358 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:11.657609   69358 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Pending
	I0919 22:24:11.657616   69358 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:11.657621   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Pending
	I0919 22:24:11.657626   69358 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:11.657636   69358 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q8mtj": pod kube-proxy-q8mtj is already assigned to node "ha-326307-m02")
	I0919 22:24:11.657642   69358 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:11.657649   69358 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Pending
	I0919 22:24:11.657654   69358 system_pods.go:61] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:11.657660   69358 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Pending
	I0919 22:24:11.657665   69358 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:11.657673   69358 system_pods.go:74] duration metric: took 4.68298ms to wait for pod list to return data ...
	I0919 22:24:11.657687   69358 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:24:11.660430   69358 default_sa.go:45] found service account: "default"
	I0919 22:24:11.660456   69358 default_sa.go:55] duration metric: took 2.762581ms for default service account to be created ...
	I0919 22:24:11.660467   69358 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:24:11.664515   69358 system_pods.go:86] 17 kube-system pods found
	I0919 22:24:11.664549   69358 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:11.664557   69358 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:11.664563   69358 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:11.664567   69358 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Pending
	I0919 22:24:11.664574   69358 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:11.664583   69358 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Pending: PodScheduled:SchedulerError (pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed)
	I0919 22:24:11.664590   69358 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:11.664594   69358 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Pending
	I0919 22:24:11.664600   69358 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:11.664606   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Pending
	I0919 22:24:11.664615   69358 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:11.664623   69358 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q8mtj": pod kube-proxy-q8mtj is already assigned to node "ha-326307-m02")
	I0919 22:24:11.664629   69358 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:11.664637   69358 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Pending
	I0919 22:24:11.664643   69358 system_pods.go:89] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:11.664649   69358 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Pending
	I0919 22:24:11.664653   69358 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:11.664663   69358 system_pods.go:126] duration metric: took 4.189005ms to wait for k8s-apps to be running ...
	I0919 22:24:11.664676   69358 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:24:11.664734   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:24:11.677679   69358 system_svc.go:56] duration metric: took 12.991783ms WaitForService to wait for kubelet
	I0919 22:24:11.677718   69358 kubeadm.go:578] duration metric: took 2.666095008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:11.677741   69358 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:24:11.681219   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:11.681249   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:11.681276   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:11.681282   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:11.681288   69358 node_conditions.go:105] duration metric: took 3.540774ms to run NodePressure ...
	I0919 22:24:11.681302   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:24:11.681336   69358 start.go:255] writing updated cluster config ...
	I0919 22:24:11.683465   69358 out.go:203] 
	I0919 22:24:11.685336   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:11.685480   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:11.687190   69358 out.go:179] * Starting "ha-326307-m03" control-plane node in "ha-326307" cluster
	I0919 22:24:11.688774   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:24:11.690230   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:11.691529   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:24:11.691564   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:11.691570   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:11.691776   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:11.691792   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:24:11.691940   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:11.714494   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:11.714516   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:11.714538   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:11.714564   69358 start.go:360] acquireMachinesLock for ha-326307-m03: {Name:mk07818636650a6efffb19d787e84a34d6f1dd98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:11.714717   69358 start.go:364] duration metric: took 129.412µs to acquireMachinesLock for "ha-326307-m03"
	I0919 22:24:11.714749   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:fal
se kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:11.714883   69358 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:24:11.717146   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:11.717288   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:24:11.717325   69358 client.go:168] LocalClient.Create starting
	I0919 22:24:11.717396   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:24:11.717429   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:11.717444   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:11.717499   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:24:11.717523   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:11.717531   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:11.717757   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:11.736709   69358 network_create.go:77] Found existing network {name:ha-326307 subnet:0xc001c6a9f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:11.736749   69358 kic.go:121] calculated static IP "192.168.49.4" for the "ha-326307-m03" container
	I0919 22:24:11.736838   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:11.757855   69358 cli_runner.go:164] Run: docker volume create ha-326307-m03 --label name.minikube.sigs.k8s.io=ha-326307-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:11.780198   69358 oci.go:103] Successfully created a docker volume ha-326307-m03
	I0919 22:24:11.780287   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m03 --entrypoint /usr/bin/test -v ha-326307-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:12.269719   69358 oci.go:107] Successfully prepared a docker volume ha-326307-m03
	I0919 22:24:12.269772   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:24:12.269795   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:12.269864   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:16.658999   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.389088771s)
	I0919 22:24:16.659030   69358 kic.go:203] duration metric: took 4.389232064s to extract preloaded images to volume ...
	W0919 22:24:16.659114   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:16.659151   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:16.659211   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:16.714324   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307-m03 --name ha-326307-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307-m03 --network ha-326307 --ip 192.168.49.4 --volume ha-326307-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:17.029039   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Running}}
	I0919 22:24:17.050534   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.070017   69358 cli_runner.go:164] Run: docker exec ha-326307-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:17.125252   69358 oci.go:144] the created container "ha-326307-m03" has a running status.
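kic.go calculated the static address 192.168.49.4 for m03 and the docker run above pins it with --ip on the ha-326307 network. The assignment can be confirmed from the host with an ordinary inspect (illustrative only):

	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-326307-m03   # expect: 192.168.49.4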
	I0919 22:24:17.125293   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa...
	I0919 22:24:17.618351   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:17.618395   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:17.646956   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.667176   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:17.667203   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:17.713667   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.734276   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:17.734370   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:17.755726   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:17.755941   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:17.755953   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:17.894482   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:24:17.894512   69358 ubuntu.go:182] provisioning hostname "ha-326307-m03"
	I0919 22:24:17.894572   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:17.914204   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:17.914507   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:17.914530   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m03 && echo "ha-326307-m03" | sudo tee /etc/hostname
	I0919 22:24:18.068724   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:24:18.068805   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.088244   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:18.088504   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:18.088525   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:18.227353   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:18.227390   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:24:18.227421   69358 ubuntu.go:190] setting up certificates
	I0919 22:24:18.227433   69358 provision.go:84] configureAuth start
	I0919 22:24:18.227496   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.247948   69358 provision.go:143] copyHostCerts
	I0919 22:24:18.247989   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:24:18.248023   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:24:18.248029   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:24:18.248096   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:24:18.248231   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:24:18.248289   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:24:18.248299   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:24:18.248338   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:24:18.248404   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:24:18.248423   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:24:18.248427   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:24:18.248457   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:24:18.248512   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m03 san=[127.0.0.1 192.168.49.4 ha-326307-m03 localhost minikube]
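provision.go generates a per-machine Docker server certificate whose SANs must cover every name and IP in the san=[...] list above. A quick, illustrative way to see what ended up in the certificate (path taken from the log):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'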
	I0919 22:24:18.393257   69358 provision.go:177] copyRemoteCerts
	I0919 22:24:18.393319   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:18.393353   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.412748   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.514005   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:18.514092   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:24:18.542657   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:18.542733   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:18.569691   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:18.569759   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:18.596329   69358 provision.go:87] duration metric: took 368.876183ms to configureAuth
	I0919 22:24:18.596357   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:18.596551   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:18.596562   69358 machine.go:96] duration metric: took 862.263986ms to provisionDockerMachine
	I0919 22:24:18.596567   69358 client.go:171] duration metric: took 6.879237415s to LocalClient.Create
	I0919 22:24:18.596586   69358 start.go:167] duration metric: took 6.879300568s to libmachine.API.Create "ha-326307"
	I0919 22:24:18.596594   69358 start.go:293] postStartSetup for "ha-326307-m03" (driver="docker")
	I0919 22:24:18.596602   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:18.596644   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:18.596677   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.615349   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.717907   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:18.722093   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:18.722137   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:18.722150   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:18.722173   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:18.722186   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:24:18.722248   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:24:18.722356   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:24:18.722372   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:24:18.722580   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:18.732899   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:24:18.766453   69358 start.go:296] duration metric: took 169.843532ms for postStartSetup
	I0919 22:24:18.766899   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.786322   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:18.786775   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:18.786833   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.806377   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.901798   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:18.907121   69358 start.go:128] duration metric: took 7.192223106s to createHost
	I0919 22:24:18.907180   69358 start.go:83] releasing machines lock for "ha-326307-m03", held for 7.192445142s
	I0919 22:24:18.907266   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.929545   69358 out.go:179] * Found network options:
	I0919 22:24:18.931020   69358 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:24:18.932299   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932334   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932375   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932396   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:18.932501   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:18.932558   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.932588   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:18.932662   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.952990   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.953400   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:19.131622   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:19.165991   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:19.166079   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:19.197850   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:19.197878   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:24:19.197909   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:19.197960   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:24:19.211538   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:19.223959   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:24:19.224009   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:24:19.239088   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:24:19.254102   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:24:19.328965   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:24:19.406808   69358 docker.go:234] disabling docker service ...
	I0919 22:24:19.406888   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:24:19.425948   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:24:19.438801   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:24:19.510941   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:24:19.581470   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:19.594683   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:19.613666   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:19.627192   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:19.638603   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:19.638668   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:19.649965   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:19.661530   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:19.673111   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:19.684782   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:19.696056   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:19.707630   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:19.719687   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:19.731477   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:19.741738   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:19.751963   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:19.822277   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
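The run of sed edits above rewrites /etc/containerd/config.toml in place: systemd cgroup driver (SystemdCgroup = true), the pinned pause image, the runc v2 shim, and the CNI conf dir, after which containerd is restarted. An illustrative sanity check of the rewritten file:

	grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	sudo systemctl is-active containerd   # expect: active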
	I0919 22:24:19.931918   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:24:19.931995   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:24:19.936531   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:24:19.936591   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:24:19.940632   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:19.977944   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:24:19.978013   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:24:20.003290   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:24:20.032714   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:24:20.034190   69358 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:20.035560   69358 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:24:20.036915   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:20.055444   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:20.059762   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:20.072851   69358 mustload.go:65] Loading cluster: ha-326307
	I0919 22:24:20.073081   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:20.073298   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:24:20.091365   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:24:20.091605   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.4
	I0919 22:24:20.091616   69358 certs.go:194] generating shared ca certs ...
	I0919 22:24:20.091629   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.091746   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:24:20.091786   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:24:20.091796   69358 certs.go:256] generating profile certs ...
	I0919 22:24:20.091865   69358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:24:20.091891   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604
	I0919 22:24:20.091905   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:24:20.372898   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 ...
	I0919 22:24:20.372943   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604: {Name:mk9b724916886d4c69140cc45e23ce082460d116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.373186   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604 ...
	I0919 22:24:20.373210   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604: {Name:mkfc0cd42f96faa2f697a81fc7ca671182c3cea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.373311   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:24:20.373471   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
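The regenerated apiserver certificate has to carry every control-plane IP plus the HA VIP (the IP list in the crypto.go line above); otherwise clients reaching the cluster through 192.168.49.254 or through any single node would fail TLS verification. Inspecting the SANs directly (illustrative; -ext needs OpenSSL 1.1.1 or newer):

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt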
	I0919 22:24:20.373649   69358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:24:20.373668   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:20.373682   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:20.373692   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:20.373703   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:20.373713   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:20.373723   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:20.373733   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:20.373743   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:20.373795   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:24:20.373823   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:20.373832   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:24:20.373856   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:24:20.373878   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:20.373899   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:20.373936   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:24:20.373962   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:24:20.373976   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:20.373987   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:24:20.374034   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:24:20.394051   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:24:20.484593   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:20.489010   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:20.503471   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:20.507649   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:24:20.522195   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:20.526410   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:20.541840   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:20.546043   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:24:20.560364   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:20.564230   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:20.577547   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:20.581387   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:20.594800   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:20.622991   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:20.651461   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:20.678113   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:20.705292   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:24:20.732489   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:20.762310   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:20.789808   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:20.819251   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:24:20.851010   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:20.879714   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:24:20.908177   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:20.928644   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:24:20.949340   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:20.969391   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:24:20.989837   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:21.011118   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:21.031485   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:24:21.052354   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:24:21.058486   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:24:21.069582   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.074372   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.074440   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.082186   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:21.092957   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:24:21.104085   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.108193   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.108258   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.116078   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:21.127607   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:21.139338   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.143794   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.143848   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.151321   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:21.162759   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:21.166499   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:21.166555   69358 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0919 22:24:21.166642   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:21.166677   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:21.166738   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:21.180123   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:21.180202   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
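
Note on the step above: because `lsmod | grep ip_vs` returned nothing, kube-vip is generated without IPVS control-plane load balancing and relies on ARP leader election for the 192.168.49.254 VIP. A minimal sketch for checking whether the modules are merely unloaded rather than unavailable (assumes shell access on the node, e.g. `minikube -p ha-326307 ssh`):

    # try to load the common IPVS modules, then list them; a failure here
    # means the kernel genuinely lacks ip_vs support
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    lsmod | grep ip_vs
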
	I0919 22:24:21.180261   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:21.189900   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:21.189963   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:21.200336   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:24:21.220715   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:21.244525   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:21.268789   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:21.272885   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:21.285764   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:21.362911   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:21.394403   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:24:21.394691   69358 start.go:317] joinCluster: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.394850   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:21.394898   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:24:21.419020   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:24:21.569927   69358 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:21.569980   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uhifqr.okdtfjqzhuoxbb2e --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:24:32.089764   69358 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uhifqr.okdtfjqzhuoxbb2e --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.519762438s)
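
The join above uses the token emitted by the `kubeadm token create --print-join-command` call two entries earlier. A sketch of the manual equivalent, should the step need to be reproduced outside the test harness (the token and CA hash are cluster-specific and must come from a fresh print-join-command run, not from this log):

    # on an existing control-plane node: emit a fresh join command
    sudo kubeadm token create --print-join-command --ttl=0
    # on the joining node: run that command plus the control-plane flags minikube adds, e.g.
    #   --control-plane --apiserver-advertise-address=<node ip> --apiserver-bind-port=8443 \
    #   --cri-socket unix:///run/containerd/containerd.sock --ignore-preflight-errors=all
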
	I0919 22:24:32.089793   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:24:32.309566   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307-m03 minikube.k8s.io/updated_at=2025_09_19T22_24_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=false
	I0919 22:24:32.391142   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326307-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:24:32.471336   69358 start.go:319] duration metric: took 11.076641052s to joinCluster
	I0919 22:24:32.471402   69358 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:32.471770   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:32.473461   69358 out.go:179] * Verifying Kubernetes components...
	I0919 22:24:32.475427   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:32.579664   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:32.593786   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:24:32.593856   69358 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:24:32.594084   69358 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m03" to be "Ready" ...
	W0919 22:24:34.597297   69358 node_ready.go:57] node "ha-326307-m03" has "Ready":"False" status (will retry)
	I0919 22:24:35.098269   69358 node_ready.go:49] node "ha-326307-m03" is "Ready"
	I0919 22:24:35.098296   69358 node_ready.go:38] duration metric: took 2.504196997s for node "ha-326307-m03" to be "Ready" ...
	I0919 22:24:35.098310   69358 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:24:35.098358   69358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:24:35.111440   69358 api_server.go:72] duration metric: took 2.640014462s to wait for apiserver process to appear ...
	I0919 22:24:35.111465   69358 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:24:35.111483   69358 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:24:35.115724   69358 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
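
The healthz probe above can be reproduced directly against either the first control plane or the HA VIP; `-k` is needed because the cluster serves minikube's own CA rather than a publicly trusted certificate:

    curl -k https://192.168.49.2:8443/healthz      # node endpoint checked in the log
    curl -k https://192.168.49.254:8443/healthz    # kube-vip virtual IP
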
	I0919 22:24:35.116810   69358 api_server.go:141] control plane version: v1.34.0
	I0919 22:24:35.116837   69358 api_server.go:131] duration metric: took 5.364462ms to wait for apiserver health ...
	I0919 22:24:35.116849   69358 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:24:35.123343   69358 system_pods.go:59] 27 kube-system pods found
	I0919 22:24:35.123372   69358 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:35.123377   69358 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:35.123380   69358 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:35.123384   69358 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:24:35.123387   69358 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Pending
	I0919 22:24:35.123390   69358 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:35.123393   69358 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:24:35.123400   69358 system_pods.go:61] "kindnet-pnj9r" [14a458fc-0e9d-42e9-9473-f7f2a6f7f571] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-pnj9r": pod kindnet-pnj9r is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123408   69358 system_pods.go:61] "kindnet-qxwpq" [173e48ec-ef56-4824-9f55-a04b199b7943] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qxwpq": pod kindnet-qxwpq is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123416   69358 system_pods.go:61] "kindnet-wcct9" [5472dcae-344b-43fb-84d1-8d0d41852cd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wcct9": pod kindnet-wcct9 is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123427   69358 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:35.123433   69358 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:24:35.123445   69358 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Pending
	I0919 22:24:35.123450   69358 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:35.123454   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running
	I0919 22:24:35.123457   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Pending
	I0919 22:24:35.123461   69358 system_pods.go:61] "kube-proxy-6nmjx" [81414747-6c4e-495e-a28d-cb17f0c0c306] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-6nmjx": pod kube-proxy-6nmjx is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123465   69358 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:35.123469   69358 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:24:35.123472   69358 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ws89d": pod kube-proxy-ws89d is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123477   69358 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:35.123481   69358 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running
	I0919 22:24:35.123487   69358 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Pending
	I0919 22:24:35.123489   69358 system_pods.go:61] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:35.123492   69358 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:24:35.123496   69358 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Pending
	I0919 22:24:35.123503   69358 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:35.123511   69358 system_pods.go:74] duration metric: took 6.65469ms to wait for pod list to return data ...
	I0919 22:24:35.123525   69358 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:24:35.126592   69358 default_sa.go:45] found service account: "default"
	I0919 22:24:35.126616   69358 default_sa.go:55] duration metric: took 3.083846ms for default service account to be created ...
	I0919 22:24:35.126627   69358 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:24:35.131895   69358 system_pods.go:86] 27 kube-system pods found
	I0919 22:24:35.131928   69358 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:35.131936   69358 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:35.131941   69358 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:35.131946   69358 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:24:35.131950   69358 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Pending
	I0919 22:24:35.131954   69358 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:35.131959   69358 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:24:35.131968   69358 system_pods.go:89] "kindnet-pnj9r" [14a458fc-0e9d-42e9-9473-f7f2a6f7f571] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-pnj9r": pod kindnet-pnj9r is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131975   69358 system_pods.go:89] "kindnet-qxwpq" [173e48ec-ef56-4824-9f55-a04b199b7943] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qxwpq": pod kindnet-qxwpq is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131986   69358 system_pods.go:89] "kindnet-wcct9" [5472dcae-344b-43fb-84d1-8d0d41852cd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wcct9": pod kindnet-wcct9 is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131993   69358 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:35.132003   69358 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:24:35.132009   69358 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Pending
	I0919 22:24:35.132015   69358 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:35.132022   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running
	I0919 22:24:35.132028   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Pending
	I0919 22:24:35.132035   69358 system_pods.go:89] "kube-proxy-6nmjx" [81414747-6c4e-495e-a28d-cb17f0c0c306] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-6nmjx": pod kube-proxy-6nmjx is already assigned to node "ha-326307-m03")
	I0919 22:24:35.132044   69358 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:35.132050   69358 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:24:35.132057   69358 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ws89d": pod kube-proxy-ws89d is already assigned to node "ha-326307-m03")
	I0919 22:24:35.132067   69358 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:35.132076   69358 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running
	I0919 22:24:35.132082   69358 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Pending
	I0919 22:24:35.132090   69358 system_pods.go:89] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:35.132096   69358 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:24:35.132101   69358 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Pending
	I0919 22:24:35.132107   69358 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:35.132117   69358 system_pods.go:126] duration metric: took 5.483041ms to wait for k8s-apps to be running ...
	I0919 22:24:35.132130   69358 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:24:35.132201   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:24:35.145901   69358 system_svc.go:56] duration metric: took 13.762213ms WaitForService to wait for kubelet
	I0919 22:24:35.145934   69358 kubeadm.go:578] duration metric: took 2.67451015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:35.145953   69358 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:24:35.149091   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149114   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149122   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149126   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149129   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149133   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149137   69358 node_conditions.go:105] duration metric: took 3.180117ms to run NodePressure ...
	I0919 22:24:35.149147   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:24:35.149187   69358 start.go:255] writing updated cluster config ...
	I0919 22:24:35.149520   69358 ssh_runner.go:195] Run: rm -f paused
	I0919 22:24:35.153920   69358 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:24:35.154452   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:24:35.158459   69358 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9j5pw" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.164361   69358 pod_ready.go:94] pod "coredns-66bc5c9577-9j5pw" is "Ready"
	I0919 22:24:35.164388   69358 pod_ready.go:86] duration metric: took 5.90604ms for pod "coredns-66bc5c9577-9j5pw" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.164396   69358 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wqvzd" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.170275   69358 pod_ready.go:94] pod "coredns-66bc5c9577-wqvzd" is "Ready"
	I0919 22:24:35.170305   69358 pod_ready.go:86] duration metric: took 5.903438ms for pod "coredns-66bc5c9577-wqvzd" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.221651   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.227692   69358 pod_ready.go:94] pod "etcd-ha-326307" is "Ready"
	I0919 22:24:35.227721   69358 pod_ready.go:86] duration metric: took 6.035355ms for pod "etcd-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.227738   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.234705   69358 pod_ready.go:94] pod "etcd-ha-326307-m02" is "Ready"
	I0919 22:24:35.234755   69358 pod_ready.go:86] duration metric: took 6.991962ms for pod "etcd-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.234769   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.355285   69358 request.go:683] "Waited before sending request" delay="120.371513ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326307-m03"
	I0919 22:24:35.555444   69358 request.go:683] "Waited before sending request" delay="196.344855ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:35.955374   69358 request.go:683] "Waited before sending request" delay="196.276117ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:35.958866   69358 pod_ready.go:94] pod "etcd-ha-326307-m03" is "Ready"
	I0919 22:24:35.958897   69358 pod_ready.go:86] duration metric: took 724.121102ms for pod "etcd-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.155371   69358 request.go:683] "Waited before sending request" delay="196.353052ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:24:36.158952   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.355354   69358 request.go:683] "Waited before sending request" delay="196.272183ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307"
	I0919 22:24:36.555231   69358 request.go:683] "Waited before sending request" delay="196.389456ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307"
	I0919 22:24:36.558900   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307" is "Ready"
	I0919 22:24:36.558927   69358 pod_ready.go:86] duration metric: took 399.940435ms for pod "kube-apiserver-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.558936   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.755357   69358 request.go:683] "Waited before sending request" delay="196.333509ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307-m02"
	I0919 22:24:36.955622   69358 request.go:683] "Waited before sending request" delay="196.371107ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:36.958850   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307-m02" is "Ready"
	I0919 22:24:36.958881   69358 pod_ready.go:86] duration metric: took 399.937855ms for pod "kube-apiserver-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.958892   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.155391   69358 request.go:683] "Waited before sending request" delay="196.40338ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307-m03"
	I0919 22:24:37.355336   69358 request.go:683] "Waited before sending request" delay="196.255836ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:37.358527   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307-m03" is "Ready"
	I0919 22:24:37.358558   69358 pod_ready.go:86] duration metric: took 399.659411ms for pod "kube-apiserver-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.555013   69358 request.go:683] "Waited before sending request" delay="196.298446ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0919 22:24:37.559362   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.755832   69358 request.go:683] "Waited before sending request" delay="196.350309ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307"
	I0919 22:24:37.954837   69358 request.go:683] "Waited before sending request" delay="195.286624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307"
	I0919 22:24:37.958236   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307" is "Ready"
	I0919 22:24:37.958266   69358 pod_ready.go:86] duration metric: took 398.878465ms for pod "kube-controller-manager-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.958274   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.155758   69358 request.go:683] "Waited before sending request" delay="197.394867ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m02"
	I0919 22:24:38.355929   69358 request.go:683] "Waited before sending request" delay="196.396129ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:38.359268   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307-m02" is "Ready"
	I0919 22:24:38.359292   69358 pod_ready.go:86] duration metric: took 401.013168ms for pod "kube-controller-manager-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.359301   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.555606   69358 request.go:683] "Waited before sending request" delay="196.234039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m03"
	I0919 22:24:38.755574   69358 request.go:683] "Waited before sending request" delay="196.387697ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:38.955366   69358 request.go:683] "Waited before sending request" delay="95.227976ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m03"
	I0919 22:24:39.154881   69358 request.go:683] "Waited before sending request" delay="196.301821ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:39.555649   69358 request.go:683] "Waited before sending request" delay="192.377634ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:39.955251   69358 request.go:683] "Waited before sending request" delay="92.286577ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	W0919 22:24:40.366591   69358 pod_ready.go:104] pod "kube-controller-manager-ha-326307-m03" is not "Ready", error: <nil>
	W0919 22:24:42.367386   69358 pod_ready.go:104] pod "kube-controller-manager-ha-326307-m03" is not "Ready", error: <nil>
	I0919 22:24:43.367824   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307-m03" is "Ready"
	I0919 22:24:43.367860   69358 pod_ready.go:86] duration metric: took 5.00855284s for pod "kube-controller-manager-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.371145   69358 pod_ready.go:83] waiting for pod "kube-proxy-8kxtv" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.376946   69358 pod_ready.go:94] pod "kube-proxy-8kxtv" is "Ready"
	I0919 22:24:43.376975   69358 pod_ready.go:86] duration metric: took 5.786362ms for pod "kube-proxy-8kxtv" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.376985   69358 pod_ready.go:83] waiting for pod "kube-proxy-q8mtj" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.555396   69358 request.go:683] "Waited before sending request" delay="178.323112ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8mtj"
	I0919 22:24:43.755331   69358 request.go:683] "Waited before sending request" delay="196.35612ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:43.758666   69358 pod_ready.go:94] pod "kube-proxy-q8mtj" is "Ready"
	I0919 22:24:43.758695   69358 pod_ready.go:86] duration metric: took 381.70368ms for pod "kube-proxy-q8mtj" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.758704   69358 pod_ready.go:83] waiting for pod "kube-proxy-ws89d" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.955265   69358 request.go:683] "Waited before sending request" delay="196.399278ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ws89d"
	I0919 22:24:44.155007   69358 request.go:683] "Waited before sending request" delay="196.303687ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:44.354881   69358 request.go:683] "Waited before sending request" delay="95.2124ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ws89d"
	I0919 22:24:44.555609   69358 request.go:683] "Waited before sending request" delay="197.246504ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:44.955613   69358 request.go:683] "Waited before sending request" delay="192.471154ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:45.355390   69358 request.go:683] "Waited before sending request" delay="92.281537ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	W0919 22:24:45.765195   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:48.265294   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:50.765471   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:53.265410   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:55.265474   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:57.765267   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:59.765483   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:02.266617   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:04.766256   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:07.265177   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:09.265694   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:11.765032   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:13.765313   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:15.766278   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	I0919 22:25:17.764644   69358 pod_ready.go:94] pod "kube-proxy-ws89d" is "Ready"
	I0919 22:25:17.764670   69358 pod_ready.go:86] duration metric: took 34.005951783s for pod "kube-proxy-ws89d" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.767738   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.772985   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307" is "Ready"
	I0919 22:25:17.773015   69358 pod_ready.go:86] duration metric: took 5.246042ms for pod "kube-scheduler-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.773023   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.778916   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307-m02" is "Ready"
	I0919 22:25:17.778942   69358 pod_ready.go:86] duration metric: took 5.914033ms for pod "kube-scheduler-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.778951   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.784122   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307-m03" is "Ready"
	I0919 22:25:17.784165   69358 pod_ready.go:86] duration metric: took 5.193982ms for pod "kube-scheduler-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.784183   69358 pod_ready.go:40] duration metric: took 42.630226972s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:25:17.833559   69358 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 22:25:17.835536   69358 out.go:179] * Done! kubectl is now configured to use "ha-326307" cluster and "default" namespace by default
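
At this point the profile reports three joined control-plane nodes. A quick way to confirm the state that the readiness checks above relied on, using only names that appear in this log (minikube registers a kubectl context named after the profile):

    kubectl --context ha-326307 get nodes -o wide
    kubectl --context ha-326307 -n kube-system get pods -o wide
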
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7791f71e5d5a5       8c811b4aec35f       12 minutes ago      Running             busybox                   0                   b5e0c0fffea25       busybox-7b57f96db7-m8swj
	ca68bbc020e20       52546a367cc9e       14 minutes ago      Running             coredns                   0                   132023f334782       coredns-66bc5c9577-9j5pw
	1f618dc8f0392       52546a367cc9e       14 minutes ago      Running             coredns                   0                   a5ac32b4949ab       coredns-66bc5c9577-wqvzd
	f52d2d9f5881b       6e38f40d628db       14 minutes ago      Running             storage-provisioner       0                   7b77cca917bf4       storage-provisioner
	365cc00c2e009       409467f978b4a       14 minutes ago      Running             kindnet-cni               0                   96e027ec2b5fb       kindnet-gxnzs
	bd9e41958ffbb       df0860106674d       14 minutes ago      Running             kube-proxy                0                   06da62af16945       kube-proxy-8kxtv
	c6c963d9a0cae       765655ea60781       14 minutes ago      Running             kube-vip                  0                   5717652da0ef4       kube-vip-ha-326307
	456a0c3cbf5ce       46169d968e920       14 minutes ago      Running             kube-scheduler            0                   f02b9e82ff9b1       kube-scheduler-ha-326307
	05ab0247624a7       a0af72f2ec6d6       14 minutes ago      Running             kube-controller-manager   0                   6026f58e8c23a       kube-controller-manager-ha-326307
	e5c59a6abe977       5f1f5298c888d       14 minutes ago      Running             etcd                      0                   5f89382a468ad       etcd-ha-326307
	e80d65e3c7c18       90550c43ad2bc       14 minutes ago      Running             kube-apiserver            0                   3813626701bd1       kube-apiserver-ha-326307
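
The table above is the container-runtime view from inside the primary node; a sketch for regenerating it over minikube's ssh wrapper, assuming the ha-326307 profile is still running:

    minikube -p ha-326307 ssh -- sudo crictl ps
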
	
	
	==> containerd <==
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.754439323Z" level=info msg="CreateContainer within sandbox \"a5ac32b4949abcc8c1007cd2947e92633d80d759aeaf0e7d6b490f2610f81170\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.768027085Z" level=info msg="CreateContainer within sandbox \"a5ac32b4949abcc8c1007cd2947e92633d80d759aeaf0e7d6b490f2610f81170\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\""
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.768844132Z" level=info msg="StartContainer for \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\""
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.836885904Z" level=info msg="StartContainer for \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\" returns successfully"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.632881043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9j5pw,Uid:7d073e38-b63e-494d-bda0-3dde372a950b,Namespace:kube-system,Attempt:0,}"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.759782586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9j5pw,Uid:7d073e38-b63e-494d-bda0-3dde372a950b,Namespace:kube-system,Attempt:0,} returns sandbox id \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.765750080Z" level=info msg="CreateContainer within sandbox \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.779792584Z" level=info msg="CreateContainer within sandbox \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.780572301Z" level=info msg="StartContainer for \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.854015268Z" level=info msg="StartContainer for \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\" returns successfully"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.151709073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-m8swj,Uid:7533a5f9-7c6d-4476-9e03-eb8abe0aadbc,Namespace:default,Attempt:0,}"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.267660233Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.268098400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-m8swj,Uid:7533a5f9-7c6d-4476-9e03-eb8abe0aadbc,Namespace:default,Attempt:0,} returns sandbox id \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\""
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.270196453Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.412014033Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.413088793Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.414707234Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.417602556Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.418335313Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 2.148090964s"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.418383876Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.423388311Z" level=info msg="CreateContainer within sandbox \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.442455841Z" level=info msg="CreateContainer within sandbox \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.443119612Z" level=info msg="StartContainer for \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.497884940Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.500641712Z" level=info msg="StartContainer for \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\" returns successfully"
	
	
	==> coredns [1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54337 - 24572 "HINFO IN 5143313645322175939.5313042790825403134. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069732464s
	[INFO] 10.244.0.4:35490 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000326279s
	[INFO] 10.244.0.4:55330 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.014239882s
	[INFO] 10.244.1.2:39628 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210602s
	[INFO] 10.244.1.2:46891 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.001261026s
	[INFO] 10.244.1.2:43124 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.00098216s
	[INFO] 10.244.1.2:49555 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.00013424s
	[INFO] 10.244.0.4:40362 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00024135s
	[INFO] 10.244.0.4:45629 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168694s
	[INFO] 10.244.1.2:52354 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189457s
	[INFO] 10.244.1.2:43857 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161715s
	[INFO] 10.244.1.2:51922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145764s
	[INFO] 10.244.1.2:57320 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009888497s
	[INFO] 10.244.1.2:49841 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169285s
	[INFO] 10.244.0.4:51548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159656s
	[INFO] 10.244.0.4:48681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110507s
	[INFO] 10.244.1.2:52993 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137337s
	[INFO] 10.244.0.4:59855 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113524s
	[INFO] 10.244.1.2:56284 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188608s
	[INFO] 10.244.1.2:58675 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149085s
	[INFO] 10.244.1.2:38911 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131505s
	
	
	==> coredns [ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93] <==
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49588 - 50300 "HINFO IN 9047056621409016881.2982736294753326061. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063128768s
	[INFO] 10.244.0.4:39328 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.014249598s
	[INFO] 10.244.0.4:59759 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.013332957s
	[INFO] 10.244.0.4:50336 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.008865788s
	[INFO] 10.244.1.2:42753 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000158745s
	[INFO] 10.244.0.4:52334 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261159s
	[INFO] 10.244.0.4:43558 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010645205s
	[INFO] 10.244.0.4:51059 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154122s
	[INFO] 10.244.0.4:46147 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012594143s
	[INFO] 10.244.0.4:39163 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143755s
	[INFO] 10.244.0.4:57061 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014731s
	[INFO] 10.244.1.2:59502 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000129746s
	[INFO] 10.244.1.2:49570 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172915s
	[INFO] 10.244.1.2:48519 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000175653s
	[INFO] 10.244.0.4:50569 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000326714s
	[INFO] 10.244.0.4:45465 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000234038s
	[INFO] 10.244.1.2:52569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176154s
	[INFO] 10.244.1.2:36719 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000205481s
	[INFO] 10.244.1.2:58705 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195468s
	[INFO] 10.244.0.4:38035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216169s
	[INFO] 10.244.0.4:52287 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129599s
	[INFO] 10.244.0.4:37285 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186803s
	[INFO] 10.244.1.2:39163 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165716s
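
Editor's note on reading the CoreDNS query lines above: they follow the documented layout of CoreDNS's log plugin, {remote}:{port} - {id} "{type} {class} {name} {proto} {size} {do} {bufsize}" {rcode} {rflags} {rsize} {duration}. The Go sketch below is not part of the test suite; the regular expression and field names are illustrative only, assuming that default format.

package main

import (
	"fmt"
	"regexp"
)

// queryLine matches the assumed log-plugin layout, e.g.
// [INFO] 10.244.0.4:35490 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000326279s
var queryLine = regexp.MustCompile(
	`^\[INFO\] (\S+):(\d+) - (\d+) "(\S+) (\S+) (\S+) (\S+) (\d+) (\S+) (\d+)" (\S+) (\S+) (\d+) (\S+)$`)

func main() {
	line := `[INFO] 10.244.0.4:35490 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000326279s`
	m := queryLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("line did not match the assumed format")
		return
	}
	// Captures: 1=client IP, 2=client port, 3=query ID, 4=qtype, 5=class, 6=qname,
	// 7=proto, 8=request size, 9=DO bit, 10=EDNS bufsize, 11=rcode, 12=response flags,
	// 13=response size, 14=duration.
	fmt.Printf("client=%s qtype=%s name=%s rcode=%s flags=%s duration=%s\n",
		m[1], m[4], m[6], m[11], m[12], m[14])
}

Run against the sample line it prints: client=10.244.0.4 qtype=PTR name=10.0.96.10.in-addr.arpa. rcode=NOERROR flags=qr,aa,rd duration=0.000326279s.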
	
	
	==> describe nodes <==
	Name:               ha-326307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:23:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:37:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-326307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 2616418f44a84ee78b49dce19e95d1fb
	  System UUID:                9c3f30ed-68b2-4a1c-af95-9031ae210a78
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-m8swj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-9j5pw             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 coredns-66bc5c9577-wqvzd             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 etcd-ha-326307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kindnet-gxnzs                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-326307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-326307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-8kxtv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-326307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-326307                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	
	
	Name:               ha-326307-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:37:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-326307-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 e4f3b60b3b464269bc193e23d4361613
	  System UUID:                8095cd89-f43b-4d8a-adef-b40d6aaa7ad2
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tfpvf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-326307-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-mk6pv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-326307-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-326307-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-q8mtj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-326307-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-326307-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        13m   kube-proxy       
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	
	
	Name:               ha-326307-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:37:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-326307-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 1434e19b2a274233a619428a76d99322
	  System UUID:                5814a8d4-c435-490f-8e5e-a8b038e01be7
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-jdczt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-326307-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-dmxl8                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-326307-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-326307-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-ws89d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-326307-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-326307-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
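
Editor's note: as a quick sanity check on the node descriptions above, the request totals kubectl reports can be re-derived from each node's Non-terminated Pods table. The minimal Go sketch below (illustrative only) does this for ha-326307; the per-pod values are copied from its table, and percentages are computed against the allocatable 8 CPUs / 32863456Ki memory shown, using integer division to mirror kubectl's truncation.

package main

import "fmt"

func main() {
	// Order: busybox, coredns x2, etcd, kindnet, kube-apiserver,
	// kube-controller-manager, kube-proxy, kube-scheduler, kube-vip, storage-provisioner.
	cpuRequestsMilli := []int{0, 100, 100, 100, 100, 250, 200, 0, 100, 0, 0}
	memRequestsMi := []int{0, 70, 70, 100, 50, 0, 0, 0, 0, 0, 0}

	var cpu, mem int
	for _, v := range cpuRequestsMilli {
		cpu += v
	}
	for _, v := range memRequestsMi {
		mem += v
	}

	allocatableCPUMilli := 8 * 1000 // 8 CPUs
	allocatableMemKi := 32863456    // from the Allocatable block

	fmt.Printf("cpu requests: %dm (%d%%)\n", cpu, cpu*100/allocatableCPUMilli)
	fmt.Printf("memory requests: %dMi (%d%%)\n", mem, mem*1024*100/allocatableMemKi)
}

This prints "cpu requests: 950m (11%)" and "memory requests: 290Mi (0%)", matching the Allocated resources block reported for ha-326307 above.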
	
	
	==> dmesg <==
	[Sep19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001853] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001006] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.090013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.467231] i8042: Warning: Keylock active
	[  +0.009747] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004210] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001075] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000901] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000908] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001042] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001465] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.534799] block sda: the capability attribute has been deprecated.
	[  +0.099617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027269] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.089616] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [e5c59a6abe97751de42afd27010936d1c3a401fad6cd730e75a1692a895b4fbc] <==
	{"level":"warn","ts":"2025-09-19T22:24:25.352519Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.352532Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.355631Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5512420eb470d1ce","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-19T22:24:25.355692Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.355712Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.427429Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.428290Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:24:25.447984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:32950","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:24:25.491427Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(6130034673728934350 12593026477526642892 16449250771884659557)"}
	{"level":"info","ts":"2025-09-19T22:24:25.491593Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.491634Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:24:25.493734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:32956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:25.530775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:32980","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:24:25.607668Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"e4477a6cd7815365","bytes":946167,"size":"946 kB","took":"30.009579431s"}
	{"level":"info","ts":"2025-09-19T22:24:29.797825Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:31.923615Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:35.871798Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:53.749925Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:55.314881Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"5512420eb470d1ce","bytes":1356311,"size":"1.4 MB","took":"30.015547589s"}
	{"level":"info","ts":"2025-09-19T22:33:30.750666Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1558}
	{"level":"info","ts":"2025-09-19T22:33:30.775074Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1558,"took":"23.935678ms","hash":623549535,"current-db-size-bytes":4292608,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":2306048,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-09-19T22:33:30.775132Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":623549535,"revision":1558,"compact-revision":-1}
	{"level":"info","ts":"2025-09-19T22:37:33.574674Z","caller":"traceutil/trace.go:172","msg":"trace[1629775233] transaction","detail":"{read_only:false; response_revision:2889; number_of_response:1; }","duration":"112.632235ms","start":"2025-09-19T22:37:33.462006Z","end":"2025-09-19T22:37:33.574639Z","steps":["trace[1629775233] 'process raft request'  (duration: 112.400333ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:37:33.947726Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.776182ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040082596420208 > lease_revoke:<id:51ce99641422bfa2>","response":"size:29"}
	{"level":"info","ts":"2025-09-19T22:37:33.947978Z","caller":"traceutil/trace.go:172","msg":"trace[2038413] transaction","detail":"{read_only:false; response_revision:2890; number_of_response:1; }","duration":"121.321226ms","start":"2025-09-19T22:37:33.826642Z","end":"2025-09-19T22:37:33.947963Z","steps":["trace[2038413] 'process raft request'  (duration: 121.201718ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:37:59 up  1:20,  0 users,  load average: 1.23, 0.77, 0.76
	Linux ha-326307 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [365cc00c2e009eeed7e71d1202a4d406c12f0d9faee38762ba691eb2d7c71f89] <==
	I0919 22:37:10.992614       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:37:20.990243       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:37:20.990316       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:20.990527       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:37:20.990541       1 main.go:301] handling current node
	I0919 22:37:20.990553       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:37:20.990557       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:37:30.996295       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:37:30.996331       1 main.go:301] handling current node
	I0919 22:37:30.996346       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:37:30.996350       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:37:30.996547       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:37:30.996562       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:40.997255       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:37:40.997293       1 main.go:301] handling current node
	I0919 22:37:40.997312       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:37:40.997319       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:37:40.997531       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:37:40.997546       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:50.998652       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:37:50.998692       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:37:50.998942       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:37:50.998959       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:50.999080       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:37:50.999094       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e80d65e3c7c18da87f7fa003e39382f5a4285ba4782fc295197421c6b882a161] <==
	I0919 22:32:15.996526       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:22.110278       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:33:31.733595       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:33:36.316232       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:33:41.440724       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:43.430235       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:04.843923       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:47.576277       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:36:07.778568       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:37:07.288814       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:37:22.531524       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43412: use of closed network connection
	E0919 22:37:22.776721       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43434: use of closed network connection
	E0919 22:37:22.970082       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43448: use of closed network connection
	E0919 22:37:23.110093       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43464: use of closed network connection
	E0919 22:37:23.308629       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43484: use of closed network connection
	E0919 22:37:23.494833       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43500: use of closed network connection
	E0919 22:37:23.634448       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43520: use of closed network connection
	E0919 22:37:23.803885       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43532: use of closed network connection
	E0919 22:37:23.968210       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43546: use of closed network connection
	E0919 22:37:26.548300       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43614: use of closed network connection
	E0919 22:37:26.721861       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43630: use of closed network connection
	E0919 22:37:26.901556       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43648: use of closed network connection
	E0919 22:37:27.077249       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43672: use of closed network connection
	E0919 22:37:27.253310       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43700: use of closed network connection
	I0919 22:37:36.706481       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [05ab0247624a7f8ffa6bc948e3abc3adc49911c297291eb0a6dd42e3df39f4cd] <==
	I0919 22:23:38.744532       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:23:38.744726       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0919 22:23:38.744739       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0919 22:23:38.744729       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:23:38.744737       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:23:38.744759       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:23:38.745195       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 22:23:38.745255       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:23:38.746448       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:23:38.748706       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 22:23:38.750017       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.750086       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:23:38.751270       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 22:23:38.760899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 22:23:38.760926       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 22:23:38.760971       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 22:23:38.765332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.771790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:08.307746       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m02\" does not exist"
	I0919 22:24:08.319829       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:24:08.699971       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m02"
	E0919 22:24:31.036531       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8ztpb failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8ztpb\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:24:31.706808       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m03\" does not exist"
	I0919 22:24:31.736561       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:24:33.715916       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m03"
	
	
	==> kube-proxy [bd9e41958ffbbb27dab3d180a56fb27df36a1a1896db3a6f322a8aabcda57677] <==
	I0919 22:23:40.183862       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:23:40.251957       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:23:40.353105       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:23:40.353291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:23:40.353503       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:23:40.383440       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:23:40.383522       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:23:40.391534       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:23:40.391999       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:23:40.392045       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:23:40.394189       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:23:40.394304       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:23:40.394470       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:23:40.394480       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:23:40.394266       1 config.go:200] "Starting service config controller"
	I0919 22:23:40.394506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:23:40.394279       1 config.go:309] "Starting node config controller"
	I0919 22:23:40.394533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:23:40.394540       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:23:40.494617       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:23:40.494643       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:23:40.494649       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [456a0c3cbf5ce028b9cbac658728c1fee13ad8e2659bfa0c625cd685d711c708] <==
	E0919 22:23:33.115040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:23:33.129278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:23:33.194774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:23:33.337699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0919 22:23:35.055089       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:24:08.346116       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:08.346301       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:08.365410       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-78xs2" node="ha-326307-m02"
	E0919 22:24:08.365600       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="kube-system/kindnet-78xs2"
	E0919 22:24:08.379248       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"kindnet-78xs2\" not found" pod="kube-system/kindnet-78xs2"
	E0919 22:24:10.002296       1 schedule_one.go:975] "Scheduler cache AssumePod failed" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:10.002334       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	I0919 22:24:10.002368       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:31.751287       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-pnj9r" node="ha-326307-m03"
	E0919 22:24:31.751375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-pnj9r"
	E0919 22:24:31.887089       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:31.887576       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 173e48ec-ef56-4824-9f55-a04b199b7943(kube-system/kindnet-qxwpq) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	E0919 22:24:31.887605       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	I0919 22:24:31.888969       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:35.828083       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-nzzlb" node="ha-326307-m03"
	E0919 22:24:35.828187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-nzzlb"
	E0919 22:24:35.839864       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	E0919 22:24:35.839940       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ba4fd407-2e93-4324-ab2d-4f192d79fdf5(kube-system/kindnet-dmxl8) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	E0919 22:24:35.839964       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	I0919 22:24:35.841757       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	
	
	==> kubelet <==
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638035    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-cni-cfg\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638087    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-xtables-lock\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638115    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70be5fcc-7ab6-4eb1-870d-988fee1a01bb-kube-proxy\") pod \"kube-proxy-8kxtv\" (UID: \"70be5fcc-7ab6-4eb1-870d-988fee1a01bb\") " pod="kube-system/kube-proxy-8kxtv"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140870    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64376c4d-1b82-490d-887d-7f628b134014-config-volume\") pod \"coredns-66bc5c9577-wqvzd\" (UID: \"64376c4d-1b82-490d-887d-7f628b134014\") " pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140945    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d073e38-b63e-494d-bda0-3dde372a950b-config-volume\") pod \"coredns-66bc5c9577-9j5pw\" (UID: \"7d073e38-b63e-494d-bda0-3dde372a950b\") " pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140976    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkhk\" (UniqueName: \"kubernetes.io/projected/64376c4d-1b82-490d-887d-7f628b134014-kube-api-access-8tkhk\") pod \"coredns-66bc5c9577-wqvzd\" (UID: \"64376c4d-1b82-490d-887d-7f628b134014\") " pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.141004    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gmbw\" (UniqueName: \"kubernetes.io/projected/7d073e38-b63e-494d-bda0-3dde372a950b-kube-api-access-8gmbw\") pod \"coredns-66bc5c9577-9j5pw\" (UID: \"7d073e38-b63e-494d-bda0-3dde372a950b\") " pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319752    1670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\""
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319858    1670 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\"" pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319884    1670 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\"" pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319966    1670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wqvzd_kube-system(64376c4d-1b82-490d-887d-7f628b134014)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wqvzd_kube-system(64376c4d-1b82-490d-887d-7f628b134014)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\\\": failed to find network info for sandbox \\\"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\\\"\"" pod="kube-system/coredns-66bc5c9577-wqvzd" podUID="64376c4d-1b82-490d-887d-7f628b134014"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332044    1670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\""
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332130    1670 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\"" pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332205    1670 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\"" pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332288    1670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9j5pw_kube-system(7d073e38-b63e-494d-bda0-3dde372a950b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9j5pw_kube-system(7d073e38-b63e-494d-bda0-3dde372a950b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\\\": failed to find network info for sandbox \\\"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\\\"\"" pod="kube-system/coredns-66bc5c9577-9j5pw" podUID="7d073e38-b63e-494d-bda0-3dde372a950b"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.543914    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cafe04c6-2dce-4b93-b6d1-205efc39b360-tmp\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.543969    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47vqf\" (UniqueName: \"kubernetes.io/projected/cafe04c6-2dce-4b93-b6d1-205efc39b360-kube-api-access-47vqf\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.684901    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gxnzs" podStartSLOduration=1.68487896 podStartE2EDuration="1.68487896s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:40.684630982 +0000 UTC m=+6.151051272" watchObservedRunningTime="2025-09-19 22:23:40.68487896 +0000 UTC m=+6.151299251"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.685802    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8kxtv" podStartSLOduration=1.685781067 podStartE2EDuration="1.685781067s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:40.670987608 +0000 UTC m=+6.137407898" watchObservedRunningTime="2025-09-19 22:23:40.685781067 +0000 UTC m=+6.152201360"
	Sep 19 22:23:41 ha-326307 kubelet[1670]: I0919 22:23:41.676063    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.676036489 podStartE2EDuration="1.676036489s" podCreationTimestamp="2025-09-19 22:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:41.675998333 +0000 UTC m=+7.142418624" watchObservedRunningTime="2025-09-19 22:23:41.676036489 +0000 UTC m=+7.142456778"
	Sep 19 22:23:45 ha-326307 kubelet[1670]: I0919 22:23:45.164667    1670 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:23:45 ha-326307 kubelet[1670]: I0919 22:23:45.165981    1670 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:23:52 ha-326307 kubelet[1670]: I0919 22:23:52.703916    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wqvzd" podStartSLOduration=13.703896267 podStartE2EDuration="13.703896267s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:52.703429297 +0000 UTC m=+18.169849612" watchObservedRunningTime="2025-09-19 22:23:52.703896267 +0000 UTC m=+18.170316558"
	Sep 19 22:23:56 ha-326307 kubelet[1670]: I0919 22:23:56.724956    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9j5pw" podStartSLOduration=17.724936721 podStartE2EDuration="17.724936721s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:56.724564031 +0000 UTC m=+22.190984322" watchObservedRunningTime="2025-09-19 22:23:56.724936721 +0000 UTC m=+22.191357012"
	Sep 19 22:25:18 ha-326307 kubelet[1670]: I0919 22:25:18.904730    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt2kb\" (UniqueName: \"kubernetes.io/projected/7533a5f9-7c6d-4476-9e03-eb8abe0aadbc-kube-api-access-rt2kb\") pod \"busybox-7b57f96db7-m8swj\" (UID: \"7533a5f9-7c6d-4476-9e03-eb8abe0aadbc\") " pod="default/busybox-7b57f96db7-m8swj"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-326307 -n ha-326307
helpers_test.go:269: (dbg) Run:  kubectl --context ha-326307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-jdczt
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-326307 describe pod busybox-7b57f96db7-jdczt
helpers_test.go:290: (dbg) kubectl --context ha-326307 describe pod busybox-7b57f96db7-jdczt:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-jdczt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ha-326307-m03/192.168.49.4
	Start Time:       Fri, 19 Sep 2025 22:25:18 +0000
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwg8l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xwg8l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                  Age                   From               Message
	  ----     ------                  ----                  ----               -------
	  Warning  FailedScheduling        12m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-jdczt": pod busybox-7b57f96db7-jdczt is already assigned to node "ha-326307-m03"
	  Warning  FailedScheduling        12m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-jdczt": pod busybox-7b57f96db7-jdczt is already assigned to node "ha-326307-m03"
	  Normal   Scheduled               12m                   default-scheduler  Successfully assigned default/busybox-7b57f96db7-jdczt to ha-326307-m03
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f949ef20a496c1ef4510b9586bfdf0aa02ea1ca9948f762b5576ef36acab80c9": failed to find network info for sandbox "f949ef20a496c1ef4510b9586bfdf0aa02ea1ca9948f762b5576ef36acab80c9"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "306b20c8e47aaeb0b6ae068e406157020ceddab45da2b4f2ab7d80c0e47f4391": failed to find network info for sandbox "306b20c8e47aaeb0b6ae068e406157020ceddab45da2b4f2ab7d80c0e47f4391"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e6c50e24733dc1514dd610f1c51f99bc1a57d10929036ae887a87c4b187b9ac1": failed to find network info for sandbox "e6c50e24733dc1514dd610f1c51f99bc1a57d10929036ae887a87c4b187b9ac1"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9d4e7715d1c071862624112264db649229347a018044c9075df60fb9940c8e8a": failed to find network info for sandbox "9d4e7715d1c071862624112264db649229347a018044c9075df60fb9940c8e8a"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d188384b76e1cf43ce05a368351c54023e455d3fd0fddf79dc0d717558b93ee6": failed to find network info for sandbox "d188384b76e1cf43ce05a368351c54023e455d3fd0fddf79dc0d717558b93ee6"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "726dbde7347664ddd373a329867d125e92a7173ca43b01448ce154579a81a0bb": failed to find network info for sandbox "726dbde7347664ddd373a329867d125e92a7173ca43b01448ce154579a81a0bb"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e45f823e38e8e10ed14077a52e1750763b0c366d8a775bbf53f656c10861f185": failed to find network info for sandbox "e45f823e38e8e10ed14077a52e1750763b0c366d8a775bbf53f656c10861f185"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "92ae39dcfd89289f5a5fdc5ae0c23196a91b58214916aead7f95620a2697c009": failed to find network info for sandbox "92ae39dcfd89289f5a5fdc5ae0c23196a91b58214916aead7f95620a2697c009"
	  Warning  FailedCreatePodSandBox  10m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "98a14fefd702a6a8ff4d95d8bebac62053e439ee134d840d76e175ba4e8c45d6": failed to find network info for sandbox "98a14fefd702a6a8ff4d95d8bebac62053e439ee134d840d76e175ba4e8c45d6"
	  Warning  FailedCreatePodSandBox  2m34s (x39 over 10m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "68cb483b1808e73ea325cca055c7a7f1bd2a591a81aa8b2bcb8cb96560fd08b2": failed to find network info for sandbox "68cb483b1808e73ea325cca055c7a7f1bd2a591a81aa8b2bcb8cb96560fd08b2"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (31.06s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 status --output json --alsologtostderr -v 5: exit status 7 (767.013655ms)

                                                
                                                
-- stdout --
	[{"Name":"ha-326307","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-326307-m02","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-326307-m03","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-326307-m04","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:38:01.400227   88743 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:38:01.400350   88743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:01.400365   88743 out.go:374] Setting ErrFile to fd 2...
	I0919 22:38:01.400369   88743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:01.400632   88743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:38:01.400822   88743 out.go:368] Setting JSON to true
	I0919 22:38:01.400844   88743 mustload.go:65] Loading cluster: ha-326307
	I0919 22:38:01.400994   88743 notify.go:220] Checking for updates...
	I0919 22:38:01.401346   88743 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:38:01.401380   88743 status.go:174] checking status of ha-326307 ...
	I0919 22:38:01.401813   88743 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:38:01.423358   88743 status.go:371] ha-326307 host status = "Running" (err=<nil>)
	I0919 22:38:01.423410   88743 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:01.423691   88743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:38:01.442623   88743 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:01.442893   88743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:01.442947   88743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:38:01.462600   88743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:38:01.558143   88743 ssh_runner.go:195] Run: systemctl --version
	I0919 22:38:01.563073   88743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:01.576357   88743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:01.638963   88743 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:38:01.626612554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:01.639801   88743 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:01.639838   88743 api_server.go:166] Checking apiserver status ...
	I0919 22:38:01.639892   88743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:01.652988   88743 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0919 22:38:01.666310   88743 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:01.666402   88743 ssh_runner.go:195] Run: ls
	I0919 22:38:01.670687   88743 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:01.677899   88743 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:01.677931   88743 status.go:463] ha-326307 apiserver status = Running (err=<nil>)
	I0919 22:38:01.677943   88743 status.go:176] ha-326307 status: &{Name:ha-326307 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:01.677961   88743 status.go:174] checking status of ha-326307-m02 ...
	I0919 22:38:01.678292   88743 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:38:01.699321   88743 status.go:371] ha-326307-m02 host status = "Running" (err=<nil>)
	I0919 22:38:01.699352   88743 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:38:01.699701   88743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:38:01.720052   88743 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:38:01.720348   88743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:01.720408   88743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:38:01.739652   88743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:38:01.835959   88743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:01.848803   88743 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:01.848835   88743 api_server.go:166] Checking apiserver status ...
	I0919 22:38:01.848890   88743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:01.863652   88743 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup
	W0919 22:38:01.875331   88743 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:01.875404   88743 ssh_runner.go:195] Run: ls
	I0919 22:38:01.879330   88743 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:01.883834   88743 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:01.883857   88743 status.go:463] ha-326307-m02 apiserver status = Running (err=<nil>)
	I0919 22:38:01.883865   88743 status.go:176] ha-326307-m02 status: &{Name:ha-326307-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:01.883887   88743 status.go:174] checking status of ha-326307-m03 ...
	I0919 22:38:01.884112   88743 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:38:01.902969   88743 status.go:371] ha-326307-m03 host status = "Running" (err=<nil>)
	I0919 22:38:01.902994   88743 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:38:01.903304   88743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:38:01.925658   88743 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:38:01.926052   88743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:01.926102   88743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:38:01.948883   88743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:38:02.044322   88743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:02.058996   88743 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:02.059022   88743 api_server.go:166] Checking apiserver status ...
	I0919 22:38:02.059063   88743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:02.071784   88743 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W0919 22:38:02.082789   88743 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:02.082864   88743 ssh_runner.go:195] Run: ls
	I0919 22:38:02.086796   88743 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:02.090988   88743 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:02.091007   88743 status.go:463] ha-326307-m03 apiserver status = Running (err=<nil>)
	I0919 22:38:02.091014   88743 status.go:176] ha-326307-m03 status: &{Name:ha-326307-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:02.091030   88743 status.go:174] checking status of ha-326307-m04 ...
	I0919 22:38:02.091306   88743 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:38:02.111129   88743 status.go:371] ha-326307-m04 host status = "Stopped" (err=<nil>)
	I0919 22:38:02.111149   88743 status.go:384] host is not running, skipping remaining checks
	I0919 22:38:02.111185   88743 status.go:176] ha-326307-m04 status: &{Name:ha-326307-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp testdata/cp-test.txt ha-326307:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307_ha-326307-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m02 "sudo cat /home/docker/cp-test_ha-326307_ha-326307-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307:/home/docker/cp-test.txt ha-326307-m03:/home/docker/cp-test_ha-326307_ha-326307-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m03 "sudo cat /home/docker/cp-test_ha-326307_ha-326307-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307_ha-326307-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 cp ha-326307:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307_ha-326307-m04.txt: exit status 1 (159.290244ms)

                                                
                                                
** stderr ** 
	getting host: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 cp ha-326307:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307_ha-326307-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test_ha-326307_ha-326307-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test_ha-326307_ha-326307-m04.txt": exit status 1 (143.501371ms)

                                                
                                                
** stderr ** 
	ssh: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 \"sudo cat /home/docker/cp-test_ha-326307_ha-326307-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp testdata/cp-test.txt ha-326307-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m02:/home/docker/cp-test.txt ha-326307:/home/docker/cp-test_ha-326307-m02_ha-326307.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307 "sudo cat /home/docker/cp-test_ha-326307-m02_ha-326307.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m02:/home/docker/cp-test.txt ha-326307-m03:/home/docker/cp-test_ha-326307-m02_ha-326307-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m03 "sudo cat /home/docker/cp-test_ha-326307-m02_ha-326307-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m02:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307-m02_ha-326307-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m02:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307-m02_ha-326307-m04.txt: exit status 1 (151.734715ms)

                                                
                                                
** stderr ** 
	getting host: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m02:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307-m02_ha-326307-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test_ha-326307-m02_ha-326307-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test_ha-326307-m02_ha-326307-m04.txt": exit status 1 (149.651854ms)

                                                
                                                
** stderr ** 
	ssh: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 \"sudo cat /home/docker/cp-test_ha-326307-m02_ha-326307-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp testdata/cp-test.txt ha-326307-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307:/home/docker/cp-test_ha-326307-m03_ha-326307.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307 "sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307-m03_ha-326307-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m02 "sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt: exit status 1 (148.696611ms)

                                                
                                                
** stderr ** 
	getting host: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt": exit status 1 (145.779782ms)

                                                
                                                
** stderr ** 
	ssh: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 \"sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp testdata/cp-test.txt ha-326307-m04:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 cp testdata/cp-test.txt ha-326307-m04:/home/docker/cp-test.txt: exit status 1 (149.275471ms)

                                                
                                                
** stderr ** 
	getting host: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 cp testdata/cp-test.txt ha-326307-m04:/home/docker/cp-test.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (151.316581ms)

                                                
                                                
** stderr ** 
	ssh: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307-m04.txt: exit status 1 (149.385298ms)

                                                
                                                
** stderr ** 
	getting host: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (157.504267ms)

                                                
                                                
** stderr ** 
	ssh: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:545: failed to read test file 'testdata/cp-test.txt' : open /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307-m04.txt: no such file or directory
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307:/home/docker/cp-test_ha-326307-m04_ha-326307.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307:/home/docker/cp-test_ha-326307-m04_ha-326307.txt: exit status 1 (166.944774ms)

                                                
                                                
** stderr ** 
	getting host: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307:/home/docker/cp-test_ha-326307-m04_ha-326307.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (145.510887ms)

                                                
                                                
** stderr ** 
	ssh: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307 "sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307 "sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307.txt": exit status 1 (270.175053ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-326307-m04_ha-326307.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307 \"sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-326307-m04_ha-326307.txt: No such file or directory\r\n",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt: exit status 1 (166.584413ms)

                                                
                                                
** stderr ** 
	getting host: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (143.871114ms)

                                                
                                                
** stderr ** 
	ssh: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m02 "sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m02 "sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt": exit status 1 (291.909727ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m02 \"sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt: No such file or directory\r\n",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m03:/home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m03:/home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt: exit status 1 (170.225956ms)

                                                
                                                
** stderr ** 
	getting host: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m03:/home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (143.515246ms)

                                                
                                                
** stderr ** 
	ssh: "ha-326307-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m03 "sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m03 "sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt": exit status 1 (267.41554ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m03 \"sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt: No such file or directory\r\n",
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-326307
helpers_test.go:243: (dbg) docker inspect ha-326307:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	        "Created": "2025-09-19T22:23:18.619000062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 69921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:23:18.670514121Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hosts",
	        "LogPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3-json.log",
	        "Name": "/ha-326307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-326307:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-326307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	                "LowerDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-326307",
	                "Source": "/var/lib/docker/volumes/ha-326307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-326307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-326307",
	                "name.minikube.sigs.k8s.io": "ha-326307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b9c61cd0152986e2b265b3cf0a7628b1c049e495ce30493b8e54f6b9446115f",
	            "SandboxKey": "/var/run/docker/netns/8b9c61cd0152",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-326307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:80:09:d2:65:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "465af21e2d8d112a34f14f7c3ab89eac4e6e57f582ff8e32b514381f55dd085e",
	                    "EndpointID": "f35735061c65841c2c1ba7f2859db25885582588fa8f2d14e3a015320f6c3fc4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-326307",
	                        "5e0f1fe86b08"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
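
The inspect JSON above shows how the kicbase container's service ports are published on dynamic loopback ports (22/tcp -> 127.0.0.1:32788, 8443/tcp -> 127.0.0.1:32791, and so on). As a minimal illustrative sketch (not part of the captured test run), the mapped SSH port can be resolved with the same Go template the harness logs further below ("docker container inspect -f ..." in cli_runner.go); the container name ha-326307 is taken from the output above, and the helper name sshHostPort is hypothetical:

	// Sketch only: resolve the host port Docker mapped to the container's 22/tcp,
	// using the Go template seen in the minikube logs below.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		// Walks the Ports map shown in the JSON above: "22/tcp" -> [{HostIp, HostPort}].
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("ha-326307")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		// For the run captured above this would print 32788 (22/tcp -> 127.0.0.1:32788).
		fmt.Println("ssh port:", port)
	}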
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-326307 -n ha-326307
helpers_test.go:252: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 logs -n 25: (1.274222506s)
helpers_test.go:260: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307-m03.txt │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307:/home/docker/cp-test_ha-326307-m03_ha-326307.txt                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307.txt                                                │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307-m03_ha-326307-m02.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m02 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m02.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp testdata/cp-test.txt ha-326307-m04:/home/docker/cp-test.txt                                                            │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307-m04.txt │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307:/home/docker/cp-test_ha-326307-m04_ha-326307.txt                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307.txt                                                │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m02 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m03:/home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:23:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:23:13.527478   69358 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:23:13.527574   69358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:13.527579   69358 out.go:374] Setting ErrFile to fd 2...
	I0919 22:23:13.527586   69358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:13.527823   69358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:23:13.528355   69358 out.go:368] Setting JSON to false
	I0919 22:23:13.529260   69358 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3938,"bootTime":1758316656,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:23:13.529345   69358 start.go:140] virtualization: kvm guest
	I0919 22:23:13.531661   69358 out.go:179] * [ha-326307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:23:13.533198   69358 notify.go:220] Checking for updates...
	I0919 22:23:13.533231   69358 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:23:13.534827   69358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:23:13.536340   69358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:23:13.537773   69358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:23:13.539372   69358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:23:13.541189   69358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:23:13.542697   69358 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:23:13.568228   69358 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:23:13.568380   69358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:13.622546   69358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:23:13.612893654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:13.622646   69358 docker.go:318] overlay module found
	I0919 22:23:13.624668   69358 out.go:179] * Using the docker driver based on user configuration
	I0919 22:23:13.626116   69358 start.go:304] selected driver: docker
	I0919 22:23:13.626134   69358 start.go:918] validating driver "docker" against <nil>
	I0919 22:23:13.626147   69358 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:23:13.626725   69358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:13.684385   69358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:23:13.672811393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:13.684569   69358 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:23:13.684775   69358 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:23:13.686618   69358 out.go:179] * Using Docker driver with root privileges
	I0919 22:23:13.687924   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:13.688000   69358 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:23:13.688014   69358 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:23:13.688089   69358 start.go:348] cluster config:
	{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m
0s}
	I0919 22:23:13.689601   69358 out.go:179] * Starting "ha-326307" primary control-plane node in "ha-326307" cluster
	I0919 22:23:13.691305   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:23:13.692823   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:23:13.694304   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:13.694378   69358 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 22:23:13.694398   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:23:13.694426   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:23:13.694515   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:23:13.694533   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:23:13.694981   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:13.695014   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json: {Name:mk9e3af266bcfbabd18624d7d22535c6f1841e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:13.716737   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:23:13.716759   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:23:13.716776   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:23:13.716797   69358 start.go:360] acquireMachinesLock for ha-326307: {Name:mk42b79b90944aab63c8b37c2f94e04ca1ebec1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:23:13.716893   69358 start.go:364] duration metric: took 80.537µs to acquireMachinesLock for "ha-326307"
	I0919 22:23:13.716915   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:13.716974   69358 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:23:13.719062   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:23:13.719317   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:23:13.719352   69358 client.go:168] LocalClient.Create starting
	I0919 22:23:13.719447   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:23:13.719502   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:13.719517   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:13.719580   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:23:13.719600   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:13.719610   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:13.719933   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:23:13.737609   69358 cli_runner.go:211] docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:23:13.737699   69358 network_create.go:284] running [docker network inspect ha-326307] to gather additional debugging logs...
	I0919 22:23:13.737725   69358 cli_runner.go:164] Run: docker network inspect ha-326307
	W0919 22:23:13.755400   69358 cli_runner.go:211] docker network inspect ha-326307 returned with exit code 1
	I0919 22:23:13.755437   69358 network_create.go:287] error running [docker network inspect ha-326307]: docker network inspect ha-326307: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-326307 not found
	I0919 22:23:13.755455   69358 network_create.go:289] output of [docker network inspect ha-326307]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-326307 not found
	
	** /stderr **
	I0919 22:23:13.755563   69358 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:13.774541   69358 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018eb270}
	I0919 22:23:13.774578   69358 network_create.go:124] attempt to create docker network ha-326307 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:23:13.774619   69358 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-326307 ha-326307
	I0919 22:23:13.834699   69358 network_create.go:108] docker network ha-326307 192.168.49.0/24 created
	I0919 22:23:13.834730   69358 kic.go:121] calculated static IP "192.168.49.2" for the "ha-326307" container
	I0919 22:23:13.834799   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:23:13.852316   69358 cli_runner.go:164] Run: docker volume create ha-326307 --label name.minikube.sigs.k8s.io=ha-326307 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:23:13.872969   69358 oci.go:103] Successfully created a docker volume ha-326307
	I0919 22:23:13.873115   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307 --entrypoint /usr/bin/test -v ha-326307:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:23:14.277718   69358 oci.go:107] Successfully prepared a docker volume ha-326307
	I0919 22:23:14.277762   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:14.277789   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:23:14.277852   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:23:18.547851   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.269954037s)
	I0919 22:23:18.547886   69358 kic.go:203] duration metric: took 4.270092787s to extract preloaded images to volume ...
	W0919 22:23:18.548002   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:23:18.548044   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:23:18.548091   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:23:18.602395   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307 --name ha-326307 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307 --network ha-326307 --ip 192.168.49.2 --volume ha-326307:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:23:18.902433   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Running}}
	I0919 22:23:18.923488   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:18.945324   69358 cli_runner.go:164] Run: docker exec ha-326307 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:23:18.998198   69358 oci.go:144] the created container "ha-326307" has a running status.
	I0919 22:23:18.998254   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa...
	I0919 22:23:19.305578   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:23:19.305639   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:23:19.338987   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:19.361057   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:23:19.361077   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:23:19.423644   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:19.446710   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:23:19.446815   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.468914   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.469178   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.469194   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:23:19.609654   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:23:19.609685   69358 ubuntu.go:182] provisioning hostname "ha-326307"
	I0919 22:23:19.609806   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.631352   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.631769   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.631790   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307 && echo "ha-326307" | sudo tee /etc/hostname
	I0919 22:23:19.783770   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:23:19.783868   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.802757   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.802967   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.802990   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:23:19.942778   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:23:19.942811   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:23:19.942925   69358 ubuntu.go:190] setting up certificates
	I0919 22:23:19.942949   69358 provision.go:84] configureAuth start
	I0919 22:23:19.943010   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:19.963444   69358 provision.go:143] copyHostCerts
	I0919 22:23:19.963491   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:19.963531   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:23:19.963541   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:19.963629   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:23:19.963778   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:19.963807   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:23:19.963811   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:19.963862   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:23:19.963997   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:19.964030   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:23:19.964040   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:19.964080   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:23:19.964187   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307 san=[127.0.0.1 192.168.49.2 ha-326307 localhost minikube]
	I0919 22:23:20.747311   69358 provision.go:177] copyRemoteCerts
	I0919 22:23:20.747377   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:23:20.747410   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:20.766468   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:20.866991   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:23:20.867057   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:23:20.897799   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:23:20.897858   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:23:20.925953   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:23:20.926026   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:23:20.954845   69358 provision.go:87] duration metric: took 1.011880735s to configureAuth
	I0919 22:23:20.954872   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:23:20.955074   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:20.955089   69358 machine.go:96] duration metric: took 1.508356629s to provisionDockerMachine
	I0919 22:23:20.955096   69358 client.go:171] duration metric: took 7.235738314s to LocalClient.Create
	I0919 22:23:20.955122   69358 start.go:167] duration metric: took 7.235806728s to libmachine.API.Create "ha-326307"
	I0919 22:23:20.955128   69358 start.go:293] postStartSetup for "ha-326307" (driver="docker")
	I0919 22:23:20.955136   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:23:20.955224   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:23:20.955259   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:20.975767   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.077921   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:23:21.081820   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:23:21.081872   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:23:21.081881   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:23:21.081888   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:23:21.081901   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:23:21.081973   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:23:21.082057   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:23:21.082071   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:23:21.082204   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:23:21.092245   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:21.123732   69358 start.go:296] duration metric: took 168.590139ms for postStartSetup
	I0919 22:23:21.124127   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:21.143109   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:21.143414   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:23:21.143466   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.162970   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.258062   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:23:21.263437   69358 start.go:128] duration metric: took 7.546444684s to createHost
	I0919 22:23:21.263491   69358 start.go:83] releasing machines lock for "ha-326307", held for 7.546570423s
	I0919 22:23:21.263561   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:21.282251   69358 ssh_runner.go:195] Run: cat /version.json
	I0919 22:23:21.282309   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.282391   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:23:21.282539   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.302076   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.302858   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.477003   69358 ssh_runner.go:195] Run: systemctl --version
	I0919 22:23:21.481946   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:23:21.486736   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:23:21.519470   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:23:21.519573   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:23:21.549703   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:23:21.549736   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:23:21.549772   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:23:21.549813   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:23:21.563897   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:23:21.577043   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:23:21.577104   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:23:21.591898   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:23:21.607905   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:23:21.677531   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:23:21.749223   69358 docker.go:234] disabling docker service ...
	I0919 22:23:21.749348   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:23:21.771648   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:23:21.786268   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:23:21.864247   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:23:21.930620   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:23:21.943680   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:23:21.963319   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:23:21.977473   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:23:21.989630   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:23:21.989705   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:23:22.001778   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:22.013415   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:23:22.024683   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:22.036042   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:23:22.047238   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:23:22.060239   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:23:22.074324   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:23:22.087081   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:23:22.099883   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:23:22.110348   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:22.180253   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:23:22.295748   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:23:22.295832   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:23:22.300535   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:23:22.300597   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:23:22.304676   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:23:22.344790   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:23:22.344850   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:22.371338   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:22.400934   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:23:22.402669   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:22.421952   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:23:22.426523   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:22.442415   69358 kubeadm.go:875] updating cluster {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:23:22.442712   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:22.442823   69358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:23:22.482684   69358 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:23:22.482710   69358 containerd.go:534] Images already preloaded, skipping extraction
	I0919 22:23:22.482762   69358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:23:22.518500   69358 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:23:22.518526   69358 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:23:22.518533   69358 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0919 22:23:22.518616   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:23:22.518668   69358 ssh_runner.go:195] Run: sudo crictl info
	I0919 22:23:22.554956   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:22.554993   69358 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:23:22.555004   69358 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:23:22.555029   69358 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326307 NodeName:ha-326307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:23:22.555176   69358 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-326307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
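The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets copied to /var/tmp/minikube/kubeadm.yaml a few lines below. A minimal illustrative sketch of sanity-checking that file on the node with kubeadm's own validator (assumes the "kubeadm config validate" subcommand; the test does not invoke it):

  sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml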
	
	I0919 22:23:22.555209   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:23:22.555273   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:23:22.568901   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:23:22.569038   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
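Because the ip_vs modules were not found (the lsmod check above), this manifest runs kube-vip in ARP mode (vip_arp=true): the VIP 192.168.49.254 is simply announced on eth0 of whichever control-plane node holds the plndr-cp-lock lease. A minimal illustrative way to verify the VIP once the pod is running (not part of the test):

  out/minikube-linux-amd64 -p ha-326307 ssh -- ip addr show eth0                              # the leader carries 192.168.49.254 as a secondary address
  out/minikube-linux-amd64 -p ha-326307 ssh -- curl -sk https://192.168.49.254:8443/livez     # /livez is anonymously readable by default; expect "ok"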
	I0919 22:23:22.569091   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:23:22.580223   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:23:22.580317   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:23:22.591268   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 22:23:22.612688   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:23:22.636770   69358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0919 22:23:22.658657   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:23:22.681384   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:23:22.685531   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
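The bash one-liner above strips any existing control-plane.minikube.internal entry from /etc/hosts, appends a fresh mapping to the VIP, and copies the temp file back into place. A quick illustrative check of the result:

  out/minikube-linux-amd64 -p ha-326307 ssh -- getent hosts control-plane.minikube.internal   # expect 192.168.49.254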
	I0919 22:23:22.698340   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:22.769217   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:23:22.792280   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.2
	I0919 22:23:22.792300   69358 certs.go:194] generating shared ca certs ...
	I0919 22:23:22.792315   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.792509   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:23:22.792553   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:23:22.792563   69358 certs.go:256] generating profile certs ...
	I0919 22:23:22.792630   69358 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:23:22.792643   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt with IP's: []
	I0919 22:23:22.975725   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt ...
	I0919 22:23:22.975759   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt: {Name:mk32bca88dd6748516774b56251f96e4fc38a69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.975973   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key ...
	I0919 22:23:22.975990   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key: {Name:mkc0e836c004e527dbd2787dc00463a0715cf8a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.976108   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226
	I0919 22:23:22.976125   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:23:23.460427   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 ...
	I0919 22:23:23.460460   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226: {Name:mk98859e0e43a6d4b4da591dc89695908954cc81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.460672   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226 ...
	I0919 22:23:23.460693   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226: {Name:mk3473c1668aec72ec5a5598645b70e29415cdd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.460941   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:23:23.461078   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
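Note the IP list used for the apiserver profile cert above: besides the node IP it carries the in-cluster service IP 10.96.0.1 and the kube-vip VIP 192.168.49.254, so clients that reach the API server through the VIP still see a valid certificate. A minimal illustrative way to inspect those SANs (requires OpenSSL 1.1.1+ for -ext; not run by the test):

  openssl x509 -noout -ext subjectAltName \
    -in /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt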
	I0919 22:23:23.461207   69358 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:23:23.461233   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt with IP's: []
	I0919 22:23:23.489621   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt ...
	I0919 22:23:23.489652   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt: {Name:mk06f3b4cfde33781bd7076ead00f94525257452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.489837   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key ...
	I0919 22:23:23.489860   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key: {Name:mk632a617a99ac85bf5a9b022d1173caf8e7b208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.489978   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:23:23.490003   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:23:23.490018   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:23:23.490034   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:23:23.490051   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:23:23.490069   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:23:23.490087   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:23:23.490100   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:23:23.490185   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:23:23.490228   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:23:23.490238   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:23:23.490273   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:23:23.490304   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:23:23.490333   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:23:23.490390   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:23.490435   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.490455   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.490497   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.491033   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:23:23.517815   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:23:23.544857   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:23:23.571386   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:23:23.600966   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:23:23.629855   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:23:23.657907   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:23:23.685564   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:23:23.713503   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:23:23.745344   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:23:23.774311   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:23:23.807603   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:23:23.832523   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:23:23.839649   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:23:23.851364   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.856325   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.856396   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.864469   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:23:23.876649   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:23:23.888129   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.892889   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.892949   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.901167   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:23:23.912487   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:23:23.924831   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.929289   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.929357   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.937110   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
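The pattern in the last few commands is OpenSSL's hashed CA directory layout: the subject hash printed by "openssl x509 -hash -noout" becomes the name of a <hash>.0 symlink in /etc/ssl/certs, which is how b5213941.0 ends up pointing at minikubeCA.pem. A minimal illustrative sketch of the same idea:

  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
  openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt       # succeeds once the signing CA is discoverable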
	I0919 22:23:23.948517   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:23:23.952948   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:23:23.953011   69358 kubeadm.go:392] StartCluster: {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:23.953080   69358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 22:23:23.953122   69358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:23:23.991138   69358 cri.go:89] found id: ""
	I0919 22:23:23.991247   69358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:23:24.003111   69358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:23:24.013643   69358 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:23:24.013714   69358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:23:24.024557   69358 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:23:24.024576   69358 kubeadm.go:157] found existing configuration files:
	
	I0919 22:23:24.024633   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:23:24.035252   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:23:24.035322   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:23:24.045590   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:23:24.056529   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:23:24.056590   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:23:24.066716   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:23:24.077570   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:23:24.077653   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:23:24.088177   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:23:24.098372   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:23:24.098426   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:23:24.108265   69358 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:23:24.149643   69358 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:23:24.149730   69358 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:23:24.166048   69358 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:23:24.166117   69358 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:23:24.166172   69358 kubeadm.go:310] OS: Linux
	I0919 22:23:24.166213   69358 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:23:24.166275   69358 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:23:24.166357   69358 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:23:24.166446   69358 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:23:24.166536   69358 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:23:24.166608   69358 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:23:24.166683   69358 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:23:24.166760   69358 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:23:24.230351   69358 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:23:24.230487   69358 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:23:24.230602   69358 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:23:24.238806   69358 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:23:24.243498   69358 out.go:252]   - Generating certificates and keys ...
	I0919 22:23:24.243610   69358 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:23:24.243715   69358 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:23:24.335199   69358 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:23:24.361175   69358 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:23:24.769077   69358 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:23:25.053293   69358 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:23:25.392067   69358 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:23:25.392251   69358 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-326307 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:23:25.629558   69358 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:23:25.629706   69358 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-326307 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:23:26.141828   69358 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:23:26.343650   69358 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:23:26.737207   69358 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:23:26.737292   69358 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:23:27.020543   69358 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:23:27.208963   69358 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:23:27.382044   69358 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:23:27.660395   69358 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:23:27.867964   69358 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:23:27.868475   69358 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:23:27.870857   69358 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:23:27.873408   69358 out.go:252]   - Booting up control plane ...
	I0919 22:23:27.873545   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:23:27.873665   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:23:27.873811   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:23:27.884709   69358 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:23:27.884874   69358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:23:27.892815   69358 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:23:27.893043   69358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:23:27.893108   69358 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:23:27.981591   69358 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:23:27.981772   69358 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:23:29.484085   69358 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501867716s
	I0919 22:23:29.488057   69358 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:23:29.488269   69358 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:23:29.488401   69358 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:23:29.488636   69358 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:23:31.058022   69358 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.569932465s
	I0919 22:23:31.762139   69358 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.27419796s
	I0919 22:23:33.991284   69358 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.503282233s
	I0919 22:23:34.005767   69358 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:23:34.017935   69358 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:23:34.032336   69358 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:23:34.032534   69358 kubeadm.go:310] [mark-control-plane] Marking the node ha-326307 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:23:34.042496   69358 kubeadm.go:310] [bootstrap-token] Using token: ym5hq4.pw1tvtip1io4ljbf
	I0919 22:23:34.044381   69358 out.go:252]   - Configuring RBAC rules ...
	I0919 22:23:34.044558   69358 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:23:34.048649   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:23:34.057509   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:23:34.061297   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:23:34.064926   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:23:34.069534   69358 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:23:34.399239   69358 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:23:34.818126   69358 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:23:35.398001   69358 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:23:35.398907   69358 kubeadm.go:310] 
	I0919 22:23:35.399007   69358 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:23:35.399035   69358 kubeadm.go:310] 
	I0919 22:23:35.399120   69358 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:23:35.399149   69358 kubeadm.go:310] 
	I0919 22:23:35.399207   69358 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:23:35.399301   69358 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:23:35.399350   69358 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:23:35.399356   69358 kubeadm.go:310] 
	I0919 22:23:35.399402   69358 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:23:35.399408   69358 kubeadm.go:310] 
	I0919 22:23:35.399470   69358 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:23:35.399481   69358 kubeadm.go:310] 
	I0919 22:23:35.399554   69358 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:23:35.399644   69358 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:23:35.399706   69358 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:23:35.399712   69358 kubeadm.go:310] 
	I0919 22:23:35.399803   69358 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:23:35.399888   69358 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:23:35.399892   69358 kubeadm.go:310] 
	I0919 22:23:35.399971   69358 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ym5hq4.pw1tvtip1io4ljbf \
	I0919 22:23:35.400068   69358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 \
	I0919 22:23:35.400089   69358 kubeadm.go:310] 	--control-plane 
	I0919 22:23:35.400093   69358 kubeadm.go:310] 
	I0919 22:23:35.400204   69358 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:23:35.400217   69358 kubeadm.go:310] 
	I0919 22:23:35.400285   69358 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ym5hq4.pw1tvtip1io4ljbf \
	I0919 22:23:35.400382   69358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 
	I0919 22:23:35.403119   69358 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:23:35.403274   69358 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
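The join commands printed above embed a --discovery-token-ca-cert-hash, which is a SHA-256 of the cluster CA's public key. An illustrative way to recompute it on this node (the standard kubeadm recipe, pointed at minikube's cert path):

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'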
	I0919 22:23:35.403305   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:35.403317   69358 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:23:35.407302   69358 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:23:35.409983   69358 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:23:35.415011   69358 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:23:35.415039   69358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:23:35.436210   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:23:35.679694   69358 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:23:35.679756   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:35.679779   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307 minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=true
	I0919 22:23:35.787076   69358 ops.go:34] apiserver oom_adj: -16
	I0919 22:23:35.787237   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:36.287327   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:36.787300   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:37.287415   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:37.788066   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:38.287401   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:38.787731   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.288028   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.788301   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.864456   69358 kubeadm.go:1105] duration metric: took 4.184765822s to wait for elevateKubeSystemPrivileges
	I0919 22:23:39.864500   69358 kubeadm.go:394] duration metric: took 15.911493151s to StartCluster
	I0919 22:23:39.864524   69358 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:39.864601   69358 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:23:39.865911   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:39.866255   69358 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:39.866275   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:23:39.866288   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:23:39.866297   69358 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:23:39.866377   69358 addons.go:69] Setting storage-provisioner=true in profile "ha-326307"
	I0919 22:23:39.866398   69358 addons.go:238] Setting addon storage-provisioner=true in "ha-326307"
	I0919 22:23:39.866400   69358 addons.go:69] Setting default-storageclass=true in profile "ha-326307"
	I0919 22:23:39.866428   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:39.866523   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:39.866434   69358 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-326307"
	I0919 22:23:39.866921   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.867012   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.892851   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:23:39.893863   69358 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:23:39.893944   69358 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:23:39.893953   69358 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:23:39.894002   69358 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:23:39.894061   69358 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:23:39.893888   69358 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:23:39.894642   69358 addons.go:238] Setting addon default-storageclass=true in "ha-326307"
	I0919 22:23:39.894691   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:39.895196   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.895724   69358 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:23:39.897293   69358 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:23:39.897315   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:23:39.897386   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:39.923915   69358 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:23:39.923939   69358 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:23:39.924001   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:39.926323   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:39.953300   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:39.968501   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:23:40.065441   69358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:23:40.083647   69358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:23:40.190461   69358 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
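The sed pipeline above splices a hosts block (plus a log directive) into the CoreDNS Corefile before replacing the ConfigMap, which is what the "host record injected" message refers to. An illustrative way to see the injected block afterwards:

  out/minikube-linux-amd64 -p ha-326307 kubectl -- -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'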
	I0919 22:23:40.433561   69358 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:23:40.435567   69358 addons.go:514] duration metric: took 569.25898ms for enable addons: enabled=[storage-provisioner default-storageclass]
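An illustrative host-side confirmation of the same two addons (the test relies on the log line above instead):

  out/minikube-linux-amd64 -p ha-326307 addons list | grep -E 'storage-provisioner|default-storageclass'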
	I0919 22:23:40.435633   69358 start.go:246] waiting for cluster config update ...
	I0919 22:23:40.435651   69358 start.go:255] writing updated cluster config ...
	I0919 22:23:40.437510   69358 out.go:203] 
	I0919 22:23:40.439070   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:40.439141   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:40.441238   69358 out.go:179] * Starting "ha-326307-m02" control-plane node in "ha-326307" cluster
	I0919 22:23:40.443382   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:23:40.445749   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:23:40.447079   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:40.447132   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:23:40.447229   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:23:40.447308   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:23:40.447326   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:23:40.447427   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:40.470325   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:23:40.470347   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:23:40.470366   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:23:40.470391   69358 start.go:360] acquireMachinesLock for ha-326307-m02: {Name:mk4919a9b19250804b0f53d01bcd11efaf9a431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:23:40.470518   69358 start.go:364] duration metric: took 88.309µs to acquireMachinesLock for "ha-326307-m02"
	I0919 22:23:40.470552   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:40.470618   69358 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:23:40.473495   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:23:40.473607   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:23:40.473631   69358 client.go:168] LocalClient.Create starting
	I0919 22:23:40.473689   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:23:40.473724   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:40.473734   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:40.473828   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:23:40.473853   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:40.473861   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:40.474095   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:40.493916   69358 network_create.go:77] Found existing network {name:ha-326307 subnet:0xc000ad7620 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:23:40.493972   69358 kic.go:121] calculated static IP "192.168.49.3" for the "ha-326307-m02" container
	I0919 22:23:40.494055   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:23:40.516112   69358 cli_runner.go:164] Run: docker volume create ha-326307-m02 --label name.minikube.sigs.k8s.io=ha-326307-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:23:40.537046   69358 oci.go:103] Successfully created a docker volume ha-326307-m02
	I0919 22:23:40.537137   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m02 --entrypoint /usr/bin/test -v ha-326307-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:23:40.991997   69358 oci.go:107] Successfully prepared a docker volume ha-326307-m02
	I0919 22:23:40.992038   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:40.992061   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:23:40.992121   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:23:45.362629   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.370467998s)
	I0919 22:23:45.362666   69358 kic.go:203] duration metric: took 4.370603938s to extract preloaded images to volume ...
	W0919 22:23:45.362777   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:23:45.362811   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:23:45.362846   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:23:45.417833   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307-m02 --name ha-326307-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307-m02 --network ha-326307 --ip 192.168.49.3 --volume ha-326307-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:23:45.744363   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Running}}
	I0919 22:23:45.768456   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:45.789293   69358 cli_runner.go:164] Run: docker exec ha-326307-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:23:45.846760   69358 oci.go:144] the created container "ha-326307-m02" has a running status.
	I0919 22:23:45.846794   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa...
	I0919 22:23:46.005236   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:23:46.005288   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:23:46.042640   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:46.067424   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:23:46.067455   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:23:46.132729   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:46.155854   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:23:46.155967   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.177181   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.177511   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.177533   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:23:46.320054   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:23:46.320089   69358 ubuntu.go:182] provisioning hostname "ha-326307-m02"
	I0919 22:23:46.320185   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.341740   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.341951   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.341965   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m02 && echo "ha-326307-m02" | sudo tee /etc/hostname
	I0919 22:23:46.497123   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:23:46.497234   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.520214   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.520436   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.520455   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:23:46.659417   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:23:46.659458   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:23:46.659492   69358 ubuntu.go:190] setting up certificates
	I0919 22:23:46.659505   69358 provision.go:84] configureAuth start
	I0919 22:23:46.659556   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:46.679498   69358 provision.go:143] copyHostCerts
	I0919 22:23:46.679551   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:46.679598   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:23:46.679605   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:46.679712   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:23:46.679851   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:46.679882   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:23:46.679893   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:46.679947   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:23:46.680043   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:46.680141   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:23:46.680185   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:46.680251   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:23:46.680367   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m02 san=[127.0.0.1 192.168.49.3 ha-326307-m02 localhost minikube]
	I0919 22:23:46.869190   69358 provision.go:177] copyRemoteCerts
	I0919 22:23:46.869251   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:23:46.869285   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.888798   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:46.988385   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:23:46.988452   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:23:47.018227   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:23:47.018299   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:23:47.046810   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:23:47.046866   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:23:47.074372   69358 provision.go:87] duration metric: took 414.855982ms to configureAuth
	I0919 22:23:47.074400   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:23:47.074581   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:47.074598   69358 machine.go:96] duration metric: took 918.712366ms to provisionDockerMachine
	I0919 22:23:47.074607   69358 client.go:171] duration metric: took 6.600969352s to LocalClient.Create
	I0919 22:23:47.074631   69358 start.go:167] duration metric: took 6.601023702s to libmachine.API.Create "ha-326307"
	I0919 22:23:47.074642   69358 start.go:293] postStartSetup for "ha-326307-m02" (driver="docker")
	I0919 22:23:47.074650   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:23:47.074721   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:23:47.074767   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.094538   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.195213   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:23:47.199088   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:23:47.199139   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:23:47.199181   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:23:47.199191   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:23:47.199215   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:23:47.199276   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:23:47.199378   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:23:47.199394   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:23:47.199502   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:23:47.209642   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:47.240945   69358 start.go:296] duration metric: took 166.288086ms for postStartSetup
	I0919 22:23:47.241383   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:47.261061   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:47.261460   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:23:47.261513   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.280359   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.374609   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:23:47.379255   69358 start.go:128] duration metric: took 6.908623332s to createHost
	I0919 22:23:47.379283   69358 start.go:83] releasing machines lock for "ha-326307-m02", held for 6.908753842s
	I0919 22:23:47.379346   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:47.400418   69358 out.go:179] * Found network options:
	I0919 22:23:47.401854   69358 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:23:47.403072   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:23:47.403133   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:23:47.403263   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:23:47.403266   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:23:47.403326   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.403332   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.423928   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.424218   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.597529   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:23:47.630263   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:23:47.630334   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:23:47.661706   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:23:47.661733   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:23:47.661772   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:23:47.661826   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:23:47.675485   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:23:47.687726   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:23:47.687780   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:23:47.701818   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:23:47.717912   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:23:47.789825   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:23:47.863188   69358 docker.go:234] disabling docker service ...
	I0919 22:23:47.863267   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:23:47.881757   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:23:47.893830   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:23:47.963004   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:23:48.034120   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:23:48.046843   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:23:48.065279   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:23:48.078269   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:23:48.089105   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:23:48.089186   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:23:48.099867   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:48.111076   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:23:48.122049   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:48.132648   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:23:48.142263   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:23:48.152876   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:23:48.163459   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:23:48.174096   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:23:48.183483   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:23:48.192780   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:48.261004   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:23:48.364434   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:23:48.364508   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:23:48.368726   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:23:48.368792   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:23:48.372683   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:23:48.409110   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:23:48.409200   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:48.433389   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:48.460529   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:23:48.462207   69358 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:23:48.464087   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:48.482217   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:23:48.486620   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:48.498806   69358 mustload.go:65] Loading cluster: ha-326307
	I0919 22:23:48.499032   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:48.499315   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:48.518576   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:48.518850   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.3
	I0919 22:23:48.518866   69358 certs.go:194] generating shared ca certs ...
	I0919 22:23:48.518885   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.519012   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:23:48.519082   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:23:48.519096   69358 certs.go:256] generating profile certs ...
	I0919 22:23:48.519222   69358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:23:48.519259   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4
	I0919 22:23:48.519288   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:23:48.963393   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 ...
	I0919 22:23:48.963428   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4: {Name:mk381f64cc0991e3a6417e9586b9565eb7a8dbf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.963635   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4 ...
	I0919 22:23:48.963660   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4: {Name:mk4dbead0b9c36c7a3635520729a1eb2d4b33f13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.963762   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:23:48.963935   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
	I0919 22:23:48.964103   69358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:23:48.964120   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:23:48.964138   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:23:48.964166   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:23:48.964183   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:23:48.964200   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:23:48.964218   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:23:48.964234   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:23:48.964251   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:23:48.964313   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:23:48.964355   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:23:48.964366   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:23:48.964406   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:23:48.964438   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:23:48.964471   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:23:48.964528   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:48.964570   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:23:48.964592   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:23:48.964612   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:48.964731   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:48.983907   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:49.073692   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:23:49.078819   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:23:49.094234   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:23:49.099593   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:23:49.113663   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:23:49.117744   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:23:49.133048   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:23:49.136861   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:23:49.150734   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:23:49.154901   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:23:49.169388   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:23:49.173566   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:23:49.188070   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:23:49.215594   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:23:49.243561   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:23:49.271624   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:23:49.301814   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:23:49.332556   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:23:49.360723   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:23:49.388872   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:23:49.417316   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:23:49.448722   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:23:49.476877   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:23:49.504914   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:23:49.524969   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:23:49.544942   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:23:49.564506   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:23:49.584887   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:23:49.605725   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:23:49.625552   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:23:49.645811   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:23:49.652062   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:23:49.664544   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.668823   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.668889   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.676892   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:23:49.688737   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:23:49.699741   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.703762   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.703823   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.711311   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:23:49.721987   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:23:49.732874   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.737289   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.737351   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.745312   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:23:49.756384   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:23:49.760242   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:23:49.760315   69358 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0919 22:23:49.760415   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:23:49.760438   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:23:49.760476   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:23:49.773427   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:23:49.773499   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:23:49.773549   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:23:49.784237   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:23:49.784306   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:23:49.794534   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:23:49.814529   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:23:49.837846   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:23:49.859421   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:23:49.863859   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:49.876721   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:49.948089   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:23:49.971010   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:49.971327   69358 start.go:317] joinCluster: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:49.971508   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:23:49.971618   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:49.992535   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:50.137695   69358 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:50.137740   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kb90tj.om7zof6htice1y8z --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:24:08.633363   69358 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kb90tj.om7zof6htice1y8z --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.495537277s)
	I0919 22:24:08.633404   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:24:08.849981   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307-m02 minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=false
	I0919 22:24:08.928109   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326307-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:24:09.011507   69358 start.go:319] duration metric: took 19.040175049s to joinCluster
	I0919 22:24:09.011590   69358 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:09.011816   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:09.013756   69358 out.go:179] * Verifying Kubernetes components...
	I0919 22:24:09.015232   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:09.115618   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:09.130578   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:24:09.130645   69358 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:24:09.130869   69358 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m02" to be "Ready" ...
	W0919 22:24:11.134373   69358 node_ready.go:57] node "ha-326307-m02" has "Ready":"False" status (will retry)
	I0919 22:24:11.634655   69358 node_ready.go:49] node "ha-326307-m02" is "Ready"
	I0919 22:24:11.634683   69358 node_ready.go:38] duration metric: took 2.503796185s for node "ha-326307-m02" to be "Ready" ...
	I0919 22:24:11.634697   69358 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:24:11.634751   69358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:24:11.647782   69358 api_server.go:72] duration metric: took 2.636155477s to wait for apiserver process to appear ...
	I0919 22:24:11.647812   69358 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:24:11.647848   69358 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:24:11.652005   69358 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:24:11.652952   69358 api_server.go:141] control plane version: v1.34.0
	I0919 22:24:11.652975   69358 api_server.go:131] duration metric: took 5.15649ms to wait for apiserver health ...
	I0919 22:24:11.652984   69358 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:24:11.657535   69358 system_pods.go:59] 17 kube-system pods found
	I0919 22:24:11.657569   69358 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:11.657577   69358 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:11.657581   69358 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:11.657586   69358 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Pending
	I0919 22:24:11.657591   69358 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:11.657598   69358 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Pending: PodScheduled:SchedulerError (pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed)
	I0919 22:24:11.657604   69358 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:11.657609   69358 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Pending
	I0919 22:24:11.657616   69358 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:11.657621   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Pending
	I0919 22:24:11.657626   69358 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:11.657636   69358 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q8mtj": pod kube-proxy-q8mtj is already assigned to node "ha-326307-m02")
	I0919 22:24:11.657642   69358 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:11.657649   69358 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Pending
	I0919 22:24:11.657654   69358 system_pods.go:61] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:11.657660   69358 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Pending
	I0919 22:24:11.657665   69358 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:11.657673   69358 system_pods.go:74] duration metric: took 4.68298ms to wait for pod list to return data ...
	I0919 22:24:11.657687   69358 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:24:11.660430   69358 default_sa.go:45] found service account: "default"
	I0919 22:24:11.660456   69358 default_sa.go:55] duration metric: took 2.762581ms for default service account to be created ...
	I0919 22:24:11.660467   69358 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:24:11.664515   69358 system_pods.go:86] 17 kube-system pods found
	I0919 22:24:11.664549   69358 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:11.664557   69358 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:11.664563   69358 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:11.664567   69358 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Pending
	I0919 22:24:11.664574   69358 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:11.664583   69358 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Pending: PodScheduled:SchedulerError (pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed)
	I0919 22:24:11.664590   69358 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:11.664594   69358 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Pending
	I0919 22:24:11.664600   69358 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:11.664606   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Pending
	I0919 22:24:11.664615   69358 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:11.664623   69358 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q8mtj": pod kube-proxy-q8mtj is already assigned to node "ha-326307-m02")
	I0919 22:24:11.664629   69358 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:11.664637   69358 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Pending
	I0919 22:24:11.664643   69358 system_pods.go:89] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:11.664649   69358 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Pending
	I0919 22:24:11.664653   69358 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:11.664663   69358 system_pods.go:126] duration metric: took 4.189005ms to wait for k8s-apps to be running ...
	I0919 22:24:11.664676   69358 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:24:11.664734   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:24:11.677679   69358 system_svc.go:56] duration metric: took 12.991783ms WaitForService to wait for kubelet
	I0919 22:24:11.677718   69358 kubeadm.go:578] duration metric: took 2.666095008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:11.677741   69358 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:24:11.681219   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:11.681249   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:11.681276   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:11.681282   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:11.681288   69358 node_conditions.go:105] duration metric: took 3.540774ms to run NodePressure ...
	I0919 22:24:11.681302   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:24:11.681336   69358 start.go:255] writing updated cluster config ...
	I0919 22:24:11.683465   69358 out.go:203] 
	I0919 22:24:11.685336   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:11.685480   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:11.687190   69358 out.go:179] * Starting "ha-326307-m03" control-plane node in "ha-326307" cluster
	I0919 22:24:11.688774   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:24:11.690230   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:11.691529   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:24:11.691564   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:11.691570   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:11.691776   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:11.691792   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:24:11.691940   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:11.714494   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:11.714516   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:11.714538   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:11.714564   69358 start.go:360] acquireMachinesLock for ha-326307-m03: {Name:mk07818636650a6efffb19d787e84a34d6f1dd98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:11.714717   69358 start.go:364] duration metric: took 129.412µs to acquireMachinesLock for "ha-326307-m03"
	I0919 22:24:11.714749   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:fal
se kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:11.714883   69358 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:24:11.717146   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:11.717288   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:24:11.717325   69358 client.go:168] LocalClient.Create starting
	I0919 22:24:11.717396   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:24:11.717429   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:11.717444   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:11.717499   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:24:11.717523   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:11.717531   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:11.717757   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:11.736709   69358 network_create.go:77] Found existing network {name:ha-326307 subnet:0xc001c6a9f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:11.736749   69358 kic.go:121] calculated static IP "192.168.49.4" for the "ha-326307-m03" container
	I0919 22:24:11.736838   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:11.757855   69358 cli_runner.go:164] Run: docker volume create ha-326307-m03 --label name.minikube.sigs.k8s.io=ha-326307-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:11.780198   69358 oci.go:103] Successfully created a docker volume ha-326307-m03
	I0919 22:24:11.780287   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m03 --entrypoint /usr/bin/test -v ha-326307-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:12.269719   69358 oci.go:107] Successfully prepared a docker volume ha-326307-m03
	I0919 22:24:12.269772   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:24:12.269795   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:12.269864   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:16.658999   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.389088771s)
	I0919 22:24:16.659030   69358 kic.go:203] duration metric: took 4.389232064s to extract preloaded images to volume ...
	W0919 22:24:16.659114   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:16.659151   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:16.659211   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:16.714324   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307-m03 --name ha-326307-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307-m03 --network ha-326307 --ip 192.168.49.4 --volume ha-326307-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:17.029039   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Running}}
	I0919 22:24:17.050534   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.070017   69358 cli_runner.go:164] Run: docker exec ha-326307-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:17.125252   69358 oci.go:144] the created container "ha-326307-m03" has a running status.
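The kic node container created above publishes SSH, the Docker API and the apiserver port on loopback-only host ports. A hypothetical spot-check of which host port backs 22/tcp (the port the provisioner dials next), using the container name from this run, might look like:

	$ docker port ha-326307-m03 22/tcp
	# expected output along the lines of: 127.0.0.1:32798 (the port the SSH client below connects to)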
	I0919 22:24:17.125293   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa...
	I0919 22:24:17.618351   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:17.618395   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:17.646956   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.667176   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:17.667203   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:17.713667   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.734276   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:17.734370   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:17.755726   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:17.755941   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:17.755953   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:17.894482   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:24:17.894512   69358 ubuntu.go:182] provisioning hostname "ha-326307-m03"
	I0919 22:24:17.894572   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:17.914204   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:17.914507   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:17.914530   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m03 && echo "ha-326307-m03" | sudo tee /etc/hostname
	I0919 22:24:18.068724   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:24:18.068805   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.088244   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:18.088504   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:18.088525   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:18.227353   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
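To confirm that the hostname provisioning above took effect, a manual check (a sketch only, using the node name from this run; exact /etc/hosts contents vary) could be:

	$ docker exec ha-326307-m03 sh -c 'hostname; grep ha-326307-m03 /etc/hosts'
	# expected: "ha-326307-m03" plus a "127.0.1.1 ha-326307-m03" entry added by the script above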
	I0919 22:24:18.227390   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:24:18.227421   69358 ubuntu.go:190] setting up certificates
	I0919 22:24:18.227433   69358 provision.go:84] configureAuth start
	I0919 22:24:18.227496   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.247948   69358 provision.go:143] copyHostCerts
	I0919 22:24:18.247989   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:24:18.248023   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:24:18.248029   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:24:18.248096   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:24:18.248231   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:24:18.248289   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:24:18.248299   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:24:18.248338   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:24:18.248404   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:24:18.248423   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:24:18.248427   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:24:18.248457   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:24:18.248512   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m03 san=[127.0.0.1 192.168.49.4 ha-326307-m03 localhost minikube]
	I0919 22:24:18.393257   69358 provision.go:177] copyRemoteCerts
	I0919 22:24:18.393319   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:18.393353   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.412748   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.514005   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:18.514092   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:24:18.542657   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:18.542733   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:18.569691   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:18.569759   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:18.596329   69358 provision.go:87] duration metric: took 368.876183ms to configureAuth
	I0919 22:24:18.596357   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:18.596551   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:18.596562   69358 machine.go:96] duration metric: took 862.263986ms to provisionDockerMachine
	I0919 22:24:18.596567   69358 client.go:171] duration metric: took 6.879237415s to LocalClient.Create
	I0919 22:24:18.596586   69358 start.go:167] duration metric: took 6.879300568s to libmachine.API.Create "ha-326307"
	I0919 22:24:18.596594   69358 start.go:293] postStartSetup for "ha-326307-m03" (driver="docker")
	I0919 22:24:18.596602   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:18.596644   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:18.596677   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.615349   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.717907   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:18.722093   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:18.722137   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:18.722150   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:18.722173   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:18.722186   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:24:18.722248   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:24:18.722356   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:24:18.722372   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:24:18.722580   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:18.732899   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:24:18.766453   69358 start.go:296] duration metric: took 169.843532ms for postStartSetup
	I0919 22:24:18.766899   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.786322   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:18.786775   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:18.786833   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.806377   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.901798   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:18.907121   69358 start.go:128] duration metric: took 7.192223106s to createHost
	I0919 22:24:18.907180   69358 start.go:83] releasing machines lock for "ha-326307-m03", held for 7.192445142s
	I0919 22:24:18.907266   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.929545   69358 out.go:179] * Found network options:
	I0919 22:24:18.931020   69358 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:24:18.932299   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932334   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932375   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932396   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:18.932501   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:18.932558   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.932588   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:18.932662   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.952990   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.953400   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:19.131622   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:19.165991   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:19.166079   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:19.197850   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
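The two find/sed invocations above patch the loopback CNI config in place and park any bridge/podman configs under a .mk_disabled suffix. A sketch of how to inspect the result on the node (file names depend on the base image):

	$ docker exec ha-326307-m03 sh -c 'cat /etc/cni/net.d/*loopback.conf*; ls /etc/cni/net.d/*.mk_disabled'
	# the loopback config should now carry "name": "loopback" and "cniVersion": "1.0.0",
	# and 87-podman-bridge.conflist / 100-crio-bridge.conf should show up with the .mk_disabled suffix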
	I0919 22:24:19.197878   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:24:19.197909   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:19.197960   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:24:19.211538   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:19.223959   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:24:19.224009   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:24:19.239088   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:24:19.254102   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:24:19.328965   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:24:19.406808   69358 docker.go:234] disabling docker service ...
	I0919 22:24:19.406888   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:24:19.425948   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:24:19.438801   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:24:19.510941   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:24:19.581470   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:19.594683   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:19.613666   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:19.627192   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:19.638603   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:19.638668   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:19.649965   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:19.661530   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:19.673111   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:19.684782   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:19.696056   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:19.707630   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:19.719687   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:19.731477   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:19.741738   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:19.751963   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:19.822277   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
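The sed series above rewrites /etc/containerd/config.toml before this restart: systemd cgroups, the pause:3.10.1 sandbox image, the CNI conf dir, and unprivileged ports. Assuming the stock kicbase config layout, the edited values could be spot-checked with a grep (section headers omitted here):

	$ docker exec ha-326307-m03 grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|enable_unprivileged_ports|conf_dir = ' /etc/containerd/config.toml
	# expected matches include:
	#   SystemdCgroup = true
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   restrict_oom_score_adj = false
	#   enable_unprivileged_ports = true
	#   conf_dir = "/etc/cni/net.d"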
	I0919 22:24:19.931918   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:24:19.931995   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:24:19.936531   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:24:19.936591   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:24:19.940632   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:19.977944   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:24:19.978013   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:24:20.003290   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:24:20.032714   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:24:20.034190   69358 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:20.035560   69358 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:24:20.036915   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:20.055444   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:20.059762   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:20.072851   69358 mustload.go:65] Loading cluster: ha-326307
	I0919 22:24:20.073081   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:20.073298   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:24:20.091365   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:24:20.091605   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.4
	I0919 22:24:20.091616   69358 certs.go:194] generating shared ca certs ...
	I0919 22:24:20.091629   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.091746   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:24:20.091786   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:24:20.091796   69358 certs.go:256] generating profile certs ...
	I0919 22:24:20.091865   69358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:24:20.091891   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604
	I0919 22:24:20.091905   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:24:20.372898   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 ...
	I0919 22:24:20.372943   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604: {Name:mk9b724916886d4c69140cc45e23ce082460d116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.373186   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604 ...
	I0919 22:24:20.373210   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604: {Name:mkfc0cd42f96faa2f697a81fc7ca671182c3cea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.373311   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:24:20.373471   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
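The regenerated apiserver certificate has to cover every control-plane IP plus the HA VIP. One way to confirm the SAN list on the host copy of the cert (path taken from this run; output trimmed) would be:

	$ openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'
	# should list 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.3, 192.168.49.4 and 192.168.49.254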
	I0919 22:24:20.373649   69358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:24:20.373668   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:20.373682   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:20.373692   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:20.373703   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:20.373713   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:20.373723   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:20.373733   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:20.373743   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:20.373795   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:24:20.373823   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:20.373832   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:24:20.373856   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:24:20.373878   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:20.373899   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:20.373936   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:24:20.373962   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:24:20.373976   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:20.373987   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:24:20.374034   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:24:20.394051   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:24:20.484593   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:20.489010   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:20.503471   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:20.507649   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:24:20.522195   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:20.526410   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:20.541840   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:20.546043   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:24:20.560364   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:20.564230   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:20.577547   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:20.581387   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:20.594800   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:20.622991   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:20.651461   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:20.678113   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:20.705292   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:24:20.732489   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:20.762310   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:20.789808   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:20.819251   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:24:20.851010   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:20.879714   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:24:20.908177   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:20.928644   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:24:20.949340   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:20.969391   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:24:20.989837   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:21.011118   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:21.031485   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:24:21.052354   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:24:21.058486   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:24:21.069582   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.074372   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.074440   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.082186   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:21.092957   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:24:21.104085   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.108193   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.108258   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.116078   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:21.127607   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:21.139338   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.143794   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.143848   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.151321   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:21.162759   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:21.166499   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:21.166555   69358 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0919 22:24:21.166642   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
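The kubelet unit drop-in rendered above is what later lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node. A quick verification of the per-node flags (a sketch, assuming the same container name):

	$ docker exec ha-326307-m03 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	# ExecStart should carry --hostname-override=ha-326307-m03 and --node-ip=192.168.49.4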
	I0919 22:24:21.166677   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:21.166738   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:21.180123   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:21.180202   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
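Once this manifest is written to /etc/kubernetes/manifests, kube-vip runs as a static pod on each control-plane node and uses leader election on the plndr-cp-lock lease to decide which node answers on the 192.168.49.254 VIP (ipvs-based load-balancing was skipped on this run, as logged above). A hedged way to check both after the node joins, via the test's minikube kubectl passthrough:

	$ out/minikube-linux-amd64 -p ha-326307 kubectl -- -n kube-system get lease plndr-cp-lock
	# the HOLDER column shows which control-plane node currently holds the VIP
	$ out/minikube-linux-amd64 -p ha-326307 kubectl -- -n kube-system get pods | grep kube-vip
	# expect one kube-vip-ha-326307* static pod per control-plane node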
	I0919 22:24:21.180261   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:21.189900   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:21.189963   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:21.200336   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:24:21.220715   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:21.244525   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:21.268789   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:21.272885   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:21.285764   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:21.362911   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:21.394403   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:24:21.394691   69358 start.go:317] joinCluster: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.394850   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:21.394898   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:24:21.419020   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:24:21.569927   69358 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:21.569980   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uhifqr.okdtfjqzhuoxbb2e --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:24:32.089764   69358 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uhifqr.okdtfjqzhuoxbb2e --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.519762438s)
	I0919 22:24:32.089793   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:24:32.309566   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307-m03 minikube.k8s.io/updated_at=2025_09_19T22_24_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=false
	I0919 22:24:32.391142   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326307-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:24:32.471336   69358 start.go:319] duration metric: took 11.076641052s to joinCluster
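With the join complete, the new member should appear alongside the existing control-plane nodes. A sanity check from the host, reusing the test binary and profile name from this run (a sketch only):

	$ out/minikube-linux-amd64 -p ha-326307 kubectl -- get nodes -o wide
	# expected: ha-326307, ha-326307-m02 and ha-326307-m03 with the control-plane role,
	# on 192.168.49.2 / .3 / .4; m03 may report NotReady for a few seconds right after the join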
	I0919 22:24:32.471402   69358 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:32.471770   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:32.473461   69358 out.go:179] * Verifying Kubernetes components...
	I0919 22:24:32.475427   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:32.579664   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:32.593786   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:24:32.593856   69358 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:24:32.594084   69358 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m03" to be "Ready" ...
	W0919 22:24:34.597297   69358 node_ready.go:57] node "ha-326307-m03" has "Ready":"False" status (will retry)
	I0919 22:24:35.098269   69358 node_ready.go:49] node "ha-326307-m03" is "Ready"
	I0919 22:24:35.098296   69358 node_ready.go:38] duration metric: took 2.504196997s for node "ha-326307-m03" to be "Ready" ...
	I0919 22:24:35.098310   69358 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:24:35.098358   69358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:24:35.111440   69358 api_server.go:72] duration metric: took 2.640014462s to wait for apiserver process to appear ...
	I0919 22:24:35.111465   69358 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:24:35.111483   69358 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:24:35.115724   69358 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:24:35.116810   69358 api_server.go:141] control plane version: v1.34.0
	I0919 22:24:35.116837   69358 api_server.go:131] duration metric: took 5.364462ms to wait for apiserver health ...
	I0919 22:24:35.116849   69358 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:24:35.123343   69358 system_pods.go:59] 27 kube-system pods found
	I0919 22:24:35.123372   69358 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:35.123377   69358 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:35.123380   69358 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:35.123384   69358 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:24:35.123387   69358 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Pending
	I0919 22:24:35.123390   69358 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:35.123393   69358 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:24:35.123400   69358 system_pods.go:61] "kindnet-pnj9r" [14a458fc-0e9d-42e9-9473-f7f2a6f7f571] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-pnj9r": pod kindnet-pnj9r is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123408   69358 system_pods.go:61] "kindnet-qxwpq" [173e48ec-ef56-4824-9f55-a04b199b7943] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qxwpq": pod kindnet-qxwpq is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123416   69358 system_pods.go:61] "kindnet-wcct9" [5472dcae-344b-43fb-84d1-8d0d41852cd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wcct9": pod kindnet-wcct9 is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123427   69358 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:35.123433   69358 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:24:35.123445   69358 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Pending
	I0919 22:24:35.123450   69358 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:35.123454   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running
	I0919 22:24:35.123457   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Pending
	I0919 22:24:35.123461   69358 system_pods.go:61] "kube-proxy-6nmjx" [81414747-6c4e-495e-a28d-cb17f0c0c306] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-6nmjx": pod kube-proxy-6nmjx is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123465   69358 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:35.123469   69358 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:24:35.123472   69358 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ws89d": pod kube-proxy-ws89d is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123477   69358 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:35.123481   69358 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running
	I0919 22:24:35.123487   69358 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Pending
	I0919 22:24:35.123489   69358 system_pods.go:61] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:35.123492   69358 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:24:35.123496   69358 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Pending
	I0919 22:24:35.123503   69358 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:35.123511   69358 system_pods.go:74] duration metric: took 6.65469ms to wait for pod list to return data ...
	I0919 22:24:35.123525   69358 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:24:35.126592   69358 default_sa.go:45] found service account: "default"
	I0919 22:24:35.126616   69358 default_sa.go:55] duration metric: took 3.083846ms for default service account to be created ...
	I0919 22:24:35.126627   69358 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:24:35.131895   69358 system_pods.go:86] 27 kube-system pods found
	I0919 22:24:35.131928   69358 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:35.131936   69358 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:35.131941   69358 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:35.131946   69358 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:24:35.131950   69358 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Pending
	I0919 22:24:35.131954   69358 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:35.131959   69358 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:24:35.131968   69358 system_pods.go:89] "kindnet-pnj9r" [14a458fc-0e9d-42e9-9473-f7f2a6f7f571] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-pnj9r": pod kindnet-pnj9r is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131975   69358 system_pods.go:89] "kindnet-qxwpq" [173e48ec-ef56-4824-9f55-a04b199b7943] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qxwpq": pod kindnet-qxwpq is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131986   69358 system_pods.go:89] "kindnet-wcct9" [5472dcae-344b-43fb-84d1-8d0d41852cd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wcct9": pod kindnet-wcct9 is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131993   69358 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:35.132003   69358 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:24:35.132009   69358 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Pending
	I0919 22:24:35.132015   69358 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:35.132022   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running
	I0919 22:24:35.132028   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Pending
	I0919 22:24:35.132035   69358 system_pods.go:89] "kube-proxy-6nmjx" [81414747-6c4e-495e-a28d-cb17f0c0c306] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-6nmjx": pod kube-proxy-6nmjx is already assigned to node "ha-326307-m03")
	I0919 22:24:35.132044   69358 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:35.132050   69358 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:24:35.132057   69358 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ws89d": pod kube-proxy-ws89d is already assigned to node "ha-326307-m03")
	I0919 22:24:35.132067   69358 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:35.132076   69358 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running
	I0919 22:24:35.132082   69358 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Pending
	I0919 22:24:35.132090   69358 system_pods.go:89] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:35.132096   69358 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:24:35.132101   69358 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Pending
	I0919 22:24:35.132107   69358 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:35.132117   69358 system_pods.go:126] duration metric: took 5.483041ms to wait for k8s-apps to be running ...
	I0919 22:24:35.132130   69358 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:24:35.132201   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:24:35.145901   69358 system_svc.go:56] duration metric: took 13.762213ms WaitForService to wait for kubelet
	I0919 22:24:35.145934   69358 kubeadm.go:578] duration metric: took 2.67451015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:35.145953   69358 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:24:35.149091   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149114   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149122   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149126   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149129   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149133   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149137   69358 node_conditions.go:105] duration metric: took 3.180117ms to run NodePressure ...
	I0919 22:24:35.149147   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:24:35.149187   69358 start.go:255] writing updated cluster config ...
	I0919 22:24:35.149520   69358 ssh_runner.go:195] Run: rm -f paused
	I0919 22:24:35.153920   69358 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:24:35.154452   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:24:35.158459   69358 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9j5pw" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.164361   69358 pod_ready.go:94] pod "coredns-66bc5c9577-9j5pw" is "Ready"
	I0919 22:24:35.164388   69358 pod_ready.go:86] duration metric: took 5.90604ms for pod "coredns-66bc5c9577-9j5pw" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.164396   69358 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wqvzd" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.170275   69358 pod_ready.go:94] pod "coredns-66bc5c9577-wqvzd" is "Ready"
	I0919 22:24:35.170305   69358 pod_ready.go:86] duration metric: took 5.903438ms for pod "coredns-66bc5c9577-wqvzd" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.221651   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.227692   69358 pod_ready.go:94] pod "etcd-ha-326307" is "Ready"
	I0919 22:24:35.227721   69358 pod_ready.go:86] duration metric: took 6.035355ms for pod "etcd-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.227738   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.234705   69358 pod_ready.go:94] pod "etcd-ha-326307-m02" is "Ready"
	I0919 22:24:35.234755   69358 pod_ready.go:86] duration metric: took 6.991962ms for pod "etcd-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.234769   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.355285   69358 request.go:683] "Waited before sending request" delay="120.371513ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326307-m03"
	I0919 22:24:35.555444   69358 request.go:683] "Waited before sending request" delay="196.344855ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:35.955374   69358 request.go:683] "Waited before sending request" delay="196.276117ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:35.958866   69358 pod_ready.go:94] pod "etcd-ha-326307-m03" is "Ready"
	I0919 22:24:35.958897   69358 pod_ready.go:86] duration metric: took 724.121102ms for pod "etcd-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.155371   69358 request.go:683] "Waited before sending request" delay="196.353052ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:24:36.158952   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.355354   69358 request.go:683] "Waited before sending request" delay="196.272183ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307"
	I0919 22:24:36.555231   69358 request.go:683] "Waited before sending request" delay="196.389456ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307"
	I0919 22:24:36.558900   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307" is "Ready"
	I0919 22:24:36.558927   69358 pod_ready.go:86] duration metric: took 399.940435ms for pod "kube-apiserver-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.558936   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.755357   69358 request.go:683] "Waited before sending request" delay="196.333509ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307-m02"
	I0919 22:24:36.955622   69358 request.go:683] "Waited before sending request" delay="196.371107ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:36.958850   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307-m02" is "Ready"
	I0919 22:24:36.958881   69358 pod_ready.go:86] duration metric: took 399.937855ms for pod "kube-apiserver-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.958892   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.155391   69358 request.go:683] "Waited before sending request" delay="196.40338ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307-m03"
	I0919 22:24:37.355336   69358 request.go:683] "Waited before sending request" delay="196.255836ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:37.358527   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307-m03" is "Ready"
	I0919 22:24:37.358558   69358 pod_ready.go:86] duration metric: took 399.659411ms for pod "kube-apiserver-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.555013   69358 request.go:683] "Waited before sending request" delay="196.298446ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0919 22:24:37.559362   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.755832   69358 request.go:683] "Waited before sending request" delay="196.350309ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307"
	I0919 22:24:37.954837   69358 request.go:683] "Waited before sending request" delay="195.286624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307"
	I0919 22:24:37.958236   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307" is "Ready"
	I0919 22:24:37.958266   69358 pod_ready.go:86] duration metric: took 398.878465ms for pod "kube-controller-manager-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.958274   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.155758   69358 request.go:683] "Waited before sending request" delay="197.394867ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m02"
	I0919 22:24:38.355929   69358 request.go:683] "Waited before sending request" delay="196.396129ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:38.359268   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307-m02" is "Ready"
	I0919 22:24:38.359292   69358 pod_ready.go:86] duration metric: took 401.013168ms for pod "kube-controller-manager-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.359301   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.555606   69358 request.go:683] "Waited before sending request" delay="196.234039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m03"
	I0919 22:24:38.755574   69358 request.go:683] "Waited before sending request" delay="196.387697ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:38.955366   69358 request.go:683] "Waited before sending request" delay="95.227976ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m03"
	I0919 22:24:39.154881   69358 request.go:683] "Waited before sending request" delay="196.301821ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:39.555649   69358 request.go:683] "Waited before sending request" delay="192.377634ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:39.955251   69358 request.go:683] "Waited before sending request" delay="92.286577ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	W0919 22:24:40.366591   69358 pod_ready.go:104] pod "kube-controller-manager-ha-326307-m03" is not "Ready", error: <nil>
	W0919 22:24:42.367386   69358 pod_ready.go:104] pod "kube-controller-manager-ha-326307-m03" is not "Ready", error: <nil>
	I0919 22:24:43.367824   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307-m03" is "Ready"
	I0919 22:24:43.367860   69358 pod_ready.go:86] duration metric: took 5.00855284s for pod "kube-controller-manager-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.371145   69358 pod_ready.go:83] waiting for pod "kube-proxy-8kxtv" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.376946   69358 pod_ready.go:94] pod "kube-proxy-8kxtv" is "Ready"
	I0919 22:24:43.376975   69358 pod_ready.go:86] duration metric: took 5.786362ms for pod "kube-proxy-8kxtv" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.376985   69358 pod_ready.go:83] waiting for pod "kube-proxy-q8mtj" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.555396   69358 request.go:683] "Waited before sending request" delay="178.323112ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8mtj"
	I0919 22:24:43.755331   69358 request.go:683] "Waited before sending request" delay="196.35612ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:43.758666   69358 pod_ready.go:94] pod "kube-proxy-q8mtj" is "Ready"
	I0919 22:24:43.758695   69358 pod_ready.go:86] duration metric: took 381.70368ms for pod "kube-proxy-q8mtj" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.758704   69358 pod_ready.go:83] waiting for pod "kube-proxy-ws89d" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.955265   69358 request.go:683] "Waited before sending request" delay="196.399278ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ws89d"
	I0919 22:24:44.155007   69358 request.go:683] "Waited before sending request" delay="196.303687ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:44.354881   69358 request.go:683] "Waited before sending request" delay="95.2124ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ws89d"
	I0919 22:24:44.555609   69358 request.go:683] "Waited before sending request" delay="197.246504ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:44.955613   69358 request.go:683] "Waited before sending request" delay="192.471154ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:45.355390   69358 request.go:683] "Waited before sending request" delay="92.281537ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	W0919 22:24:45.765195   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:48.265294   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:50.765471   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:53.265410   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:55.265474   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:57.765267   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:59.765483   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:02.266617   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:04.766256   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:07.265177   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:09.265694   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:11.765032   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:13.765313   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:15.766278   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	I0919 22:25:17.764644   69358 pod_ready.go:94] pod "kube-proxy-ws89d" is "Ready"
	I0919 22:25:17.764670   69358 pod_ready.go:86] duration metric: took 34.005951783s for pod "kube-proxy-ws89d" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.767738   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.772985   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307" is "Ready"
	I0919 22:25:17.773015   69358 pod_ready.go:86] duration metric: took 5.246042ms for pod "kube-scheduler-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.773023   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.778916   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307-m02" is "Ready"
	I0919 22:25:17.778942   69358 pod_ready.go:86] duration metric: took 5.914033ms for pod "kube-scheduler-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.778951   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.784122   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307-m03" is "Ready"
	I0919 22:25:17.784165   69358 pod_ready.go:86] duration metric: took 5.193982ms for pod "kube-scheduler-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.784183   69358 pod_ready.go:40] duration metric: took 42.630226972s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:25:17.833559   69358 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 22:25:17.835536   69358 out.go:179] * Done! kubectl is now configured to use "ha-326307" cluster and "default" namespace by default
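
	Note on the repeated "Waited before sending request ... client-side throttling" entries above: they are emitted by client-go's token-bucket rate limiter, which falls back to roughly 5 QPS / burst 10 when QPS and Burst are left at 0 in rest.Config, as in the dumped client config. A minimal Go sketch (not part of the test; the kubeconfig path here is hypothetical) of polling the same API with those limits raised:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; the test uses the profile-specific files under .minikube/profiles.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		// Raising these avoids the client-side throttling waits seen in the log above
		// (client-go defaults to QPS 5 / Burst 10 when the fields are zero).
		cfg.QPS = 50
		cfg.Burst = 100

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
			LabelSelector: "k8s-app=kube-proxy", // one of the labels the pod_ready wait uses
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("found %d kube-proxy pods\n", len(pods.Items))
	}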
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7791f71e5d5a5       8c811b4aec35f       12 minutes ago      Running             busybox                   0                   b5e0c0fffea25       busybox-7b57f96db7-m8swj
	ca68bbc020e20       52546a367cc9e       14 minutes ago      Running             coredns                   0                   132023f334782       coredns-66bc5c9577-9j5pw
	1f618dc8f0392       52546a367cc9e       14 minutes ago      Running             coredns                   0                   a5ac32b4949ab       coredns-66bc5c9577-wqvzd
	f52d2d9f5881b       6e38f40d628db       14 minutes ago      Running             storage-provisioner       0                   7b77cca917bf4       storage-provisioner
	365cc00c2e009       409467f978b4a       14 minutes ago      Running             kindnet-cni               0                   96e027ec2b5fb       kindnet-gxnzs
	bd9e41958ffbb       df0860106674d       14 minutes ago      Running             kube-proxy                0                   06da62af16945       kube-proxy-8kxtv
	c6c963d9a0cae       765655ea60781       14 minutes ago      Running             kube-vip                  0                   5717652da0ef4       kube-vip-ha-326307
	456a0c3cbf5ce       46169d968e920       14 minutes ago      Running             kube-scheduler            0                   f02b9e82ff9b1       kube-scheduler-ha-326307
	05ab0247624a7       a0af72f2ec6d6       14 minutes ago      Running             kube-controller-manager   0                   6026f58e8c23a       kube-controller-manager-ha-326307
	e5c59a6abe977       5f1f5298c888d       14 minutes ago      Running             etcd                      0                   5f89382a468ad       etcd-ha-326307
	e80d65e3c7c18       90550c43ad2bc       14 minutes ago      Running             kube-apiserver            0                   3813626701bd1       kube-apiserver-ha-326307
	
	
	==> containerd <==
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.754439323Z" level=info msg="CreateContainer within sandbox \"a5ac32b4949abcc8c1007cd2947e92633d80d759aeaf0e7d6b490f2610f81170\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.768027085Z" level=info msg="CreateContainer within sandbox \"a5ac32b4949abcc8c1007cd2947e92633d80d759aeaf0e7d6b490f2610f81170\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\""
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.768844132Z" level=info msg="StartContainer for \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\""
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.836885904Z" level=info msg="StartContainer for \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\" returns successfully"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.632881043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9j5pw,Uid:7d073e38-b63e-494d-bda0-3dde372a950b,Namespace:kube-system,Attempt:0,}"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.759782586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9j5pw,Uid:7d073e38-b63e-494d-bda0-3dde372a950b,Namespace:kube-system,Attempt:0,} returns sandbox id \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.765750080Z" level=info msg="CreateContainer within sandbox \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.779792584Z" level=info msg="CreateContainer within sandbox \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.780572301Z" level=info msg="StartContainer for \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.854015268Z" level=info msg="StartContainer for \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\" returns successfully"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.151709073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-m8swj,Uid:7533a5f9-7c6d-4476-9e03-eb8abe0aadbc,Namespace:default,Attempt:0,}"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.267660233Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.268098400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-m8swj,Uid:7533a5f9-7c6d-4476-9e03-eb8abe0aadbc,Namespace:default,Attempt:0,} returns sandbox id \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\""
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.270196453Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.412014033Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.413088793Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.414707234Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.417602556Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.418335313Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 2.148090964s"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.418383876Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.423388311Z" level=info msg="CreateContainer within sandbox \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.442455841Z" level=info msg="CreateContainer within sandbox \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.443119612Z" level=info msg="StartContainer for \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.497884940Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.500641712Z" level=info msg="StartContainer for \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\" returns successfully"
	
	
	==> coredns [1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54337 - 24572 "HINFO IN 5143313645322175939.5313042790825403134. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069732464s
	[INFO] 10.244.0.4:35490 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000326279s
	[INFO] 10.244.0.4:55330 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.014239882s
	[INFO] 10.244.1.2:39628 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210602s
	[INFO] 10.244.1.2:46891 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.001261026s
	[INFO] 10.244.1.2:43124 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.00098216s
	[INFO] 10.244.1.2:49555 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.00013424s
	[INFO] 10.244.0.4:40362 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00024135s
	[INFO] 10.244.0.4:45629 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168694s
	[INFO] 10.244.1.2:52354 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189457s
	[INFO] 10.244.1.2:43857 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161715s
	[INFO] 10.244.1.2:51922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145764s
	[INFO] 10.244.1.2:57320 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009888497s
	[INFO] 10.244.1.2:49841 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169285s
	[INFO] 10.244.0.4:51548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159656s
	[INFO] 10.244.0.4:48681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110507s
	[INFO] 10.244.1.2:52993 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137337s
	[INFO] 10.244.0.4:59855 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113524s
	[INFO] 10.244.1.2:56284 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188608s
	[INFO] 10.244.1.2:58675 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149085s
	[INFO] 10.244.1.2:38911 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131505s
	
	
	==> coredns [ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93] <==
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49588 - 50300 "HINFO IN 9047056621409016881.2982736294753326061. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063128768s
	[INFO] 10.244.0.4:39328 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.014249598s
	[INFO] 10.244.0.4:59759 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.013332957s
	[INFO] 10.244.0.4:50336 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.008865788s
	[INFO] 10.244.1.2:42753 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000158745s
	[INFO] 10.244.0.4:52334 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261159s
	[INFO] 10.244.0.4:43558 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010645205s
	[INFO] 10.244.0.4:51059 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154122s
	[INFO] 10.244.0.4:46147 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012594143s
	[INFO] 10.244.0.4:39163 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143755s
	[INFO] 10.244.0.4:57061 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014731s
	[INFO] 10.244.1.2:59502 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000129746s
	[INFO] 10.244.1.2:49570 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172915s
	[INFO] 10.244.1.2:48519 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000175653s
	[INFO] 10.244.0.4:50569 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000326714s
	[INFO] 10.244.0.4:45465 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000234038s
	[INFO] 10.244.1.2:52569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176154s
	[INFO] 10.244.1.2:36719 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000205481s
	[INFO] 10.244.1.2:58705 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195468s
	[INFO] 10.244.0.4:38035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216169s
	[INFO] 10.244.0.4:52287 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129599s
	[INFO] 10.244.0.4:37285 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186803s
	[INFO] 10.244.1.2:39163 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165716s
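
	The CoreDNS query logs above record lookups such as kubernetes.default.svc.cluster.local and host.minikube.internal from the busybox pods. A small sketch, assuming it runs inside a pod that uses this cluster DNS, issuing the same kind of lookups:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		var r net.Resolver
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		// Names taken from the query log above; resolution only succeeds against the cluster DNS.
		for _, host := range []string{
			"kubernetes.default.svc.cluster.local",
			"host.minikube.internal",
		} {
			addrs, err := r.LookupHost(ctx, host)
			fmt.Println(host, addrs, err)
		}
	}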
	
	
	==> describe nodes <==
	Name:               ha-326307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:23:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:38:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-326307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 2616418f44a84ee78b49dce19e95d1fb
	  System UUID:                9c3f30ed-68b2-4a1c-af95-9031ae210a78
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-m8swj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-9j5pw             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 coredns-66bc5c9577-wqvzd             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 etcd-ha-326307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kindnet-gxnzs                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-326307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-326307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-8kxtv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-326307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-326307                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	
	
	Name:               ha-326307-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:38:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-326307-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 e4f3b60b3b464269bc193e23d4361613
	  System UUID:                8095cd89-f43b-4d8a-adef-b40d6aaa7ad2
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tfpvf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-326307-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kindnet-mk6pv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-326307-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-326307-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-q8mtj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-326307-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-326307-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        14m   kube-proxy       
	  Normal  RegisteredNode  14m   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode  14m   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	
	
	Name:               ha-326307-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:38:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-326307-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 1434e19b2a274233a619428a76d99322
	  System UUID:                5814a8d4-c435-490f-8e5e-a8b038e01be7
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-jdczt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-326307-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-dmxl8                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-326307-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-326307-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-ws89d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-326307-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-326307-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	
	
	==> dmesg <==
	[Sep19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001853] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001006] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.090013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.467231] i8042: Warning: Keylock active
	[  +0.009747] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004210] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001075] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000901] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000908] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001042] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001465] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.534799] block sda: the capability attribute has been deprecated.
	[  +0.099617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027269] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.089616] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [e5c59a6abe97751de42afd27010936d1c3a401fad6cd730e75a1692a895b4fbc] <==
	{"level":"warn","ts":"2025-09-19T22:24:25.352519Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.352532Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.355631Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5512420eb470d1ce","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-19T22:24:25.355692Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.355712Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.427429Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.428290Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:24:25.447984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:32950","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:24:25.491427Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(6130034673728934350 12593026477526642892 16449250771884659557)"}
	{"level":"info","ts":"2025-09-19T22:24:25.491593Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.491634Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:24:25.493734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:32956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:25.530775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:32980","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:24:25.607668Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"e4477a6cd7815365","bytes":946167,"size":"946 kB","took":"30.009579431s"}
	{"level":"info","ts":"2025-09-19T22:24:29.797825Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:31.923615Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:35.871798Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:53.749925Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:55.314881Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"5512420eb470d1ce","bytes":1356311,"size":"1.4 MB","took":"30.015547589s"}
	{"level":"info","ts":"2025-09-19T22:33:30.750666Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1558}
	{"level":"info","ts":"2025-09-19T22:33:30.775074Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1558,"took":"23.935678ms","hash":623549535,"current-db-size-bytes":4292608,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":2306048,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-09-19T22:33:30.775132Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":623549535,"revision":1558,"compact-revision":-1}
	{"level":"info","ts":"2025-09-19T22:37:33.574674Z","caller":"traceutil/trace.go:172","msg":"trace[1629775233] transaction","detail":"{read_only:false; response_revision:2889; number_of_response:1; }","duration":"112.632235ms","start":"2025-09-19T22:37:33.462006Z","end":"2025-09-19T22:37:33.574639Z","steps":["trace[1629775233] 'process raft request'  (duration: 112.400333ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:37:33.947726Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.776182ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040082596420208 > lease_revoke:<id:51ce99641422bfa2>","response":"size:29"}
	{"level":"info","ts":"2025-09-19T22:37:33.947978Z","caller":"traceutil/trace.go:172","msg":"trace[2038413] transaction","detail":"{read_only:false; response_revision:2890; number_of_response:1; }","duration":"121.321226ms","start":"2025-09-19T22:37:33.826642Z","end":"2025-09-19T22:37:33.947963Z","steps":["trace[2038413] 'process raft request'  (duration: 121.201718ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:38:16 up  1:20,  0 users,  load average: 1.68, 0.89, 0.80
	Linux ha-326307 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [365cc00c2e009eeed7e71d1202a4d406c12f0d9faee38762ba691eb2d7c71f89] <==
	I0919 22:37:30.996562       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:40.997255       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:37:40.997293       1 main.go:301] handling current node
	I0919 22:37:40.997312       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:37:40.997319       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:37:40.997531       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:37:40.997546       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:50.998652       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:37:50.998692       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:37:50.998942       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:37:50.998959       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:50.999080       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:37:50.999094       1 main.go:301] handling current node
	I0919 22:38:00.998283       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:38:00.998315       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:38:00.998535       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:38:00.998550       1 main.go:301] handling current node
	I0919 22:38:00.998563       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:38:00.998569       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:38:10.990811       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:38:10.990869       1 main.go:301] handling current node
	I0919 22:38:10.990889       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:38:10.990896       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:38:10.991255       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:38:10.991276       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [e80d65e3c7c18da87f7fa003e39382f5a4285ba4782fc295197421c6b882a161] <==
	I0919 22:32:15.996526       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:22.110278       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:33:31.733595       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:33:36.316232       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:33:41.440724       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:43.430235       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:04.843923       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:47.576277       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:36:07.778568       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:37:07.288814       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:37:22.531524       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43412: use of closed network connection
	E0919 22:37:22.776721       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43434: use of closed network connection
	E0919 22:37:22.970082       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43448: use of closed network connection
	E0919 22:37:23.110093       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43464: use of closed network connection
	E0919 22:37:23.308629       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43484: use of closed network connection
	E0919 22:37:23.494833       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43500: use of closed network connection
	E0919 22:37:23.634448       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43520: use of closed network connection
	E0919 22:37:23.803885       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43532: use of closed network connection
	E0919 22:37:23.968210       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43546: use of closed network connection
	E0919 22:37:26.548300       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43614: use of closed network connection
	E0919 22:37:26.721861       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43630: use of closed network connection
	E0919 22:37:26.901556       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43648: use of closed network connection
	E0919 22:37:27.077249       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43672: use of closed network connection
	E0919 22:37:27.253310       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43700: use of closed network connection
	I0919 22:37:36.706481       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [05ab0247624a7f8ffa6bc948e3abc3adc49911c297291eb0a6dd42e3df39f4cd] <==
	I0919 22:23:38.744532       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:23:38.744726       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0919 22:23:38.744739       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0919 22:23:38.744729       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:23:38.744737       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:23:38.744759       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:23:38.745195       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 22:23:38.745255       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:23:38.746448       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:23:38.748706       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 22:23:38.750017       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.750086       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:23:38.751270       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 22:23:38.760899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 22:23:38.760926       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 22:23:38.760971       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 22:23:38.765332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.771790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:08.307746       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m02\" does not exist"
	I0919 22:24:08.319829       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:24:08.699971       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m02"
	E0919 22:24:31.036531       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8ztpb failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8ztpb\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:24:31.706808       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m03\" does not exist"
	I0919 22:24:31.736561       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:24:33.715916       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m03"
	
	
	==> kube-proxy [bd9e41958ffbbb27dab3d180a56fb27df36a1a1896db3a6f322a8aabcda57677] <==
	I0919 22:23:40.183862       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:23:40.251957       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:23:40.353105       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:23:40.353291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:23:40.353503       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:23:40.383440       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:23:40.383522       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:23:40.391534       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:23:40.391999       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:23:40.392045       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:23:40.394189       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:23:40.394304       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:23:40.394470       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:23:40.394480       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:23:40.394266       1 config.go:200] "Starting service config controller"
	I0919 22:23:40.394506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:23:40.394279       1 config.go:309] "Starting node config controller"
	I0919 22:23:40.394533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:23:40.394540       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:23:40.494617       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:23:40.494643       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:23:40.494649       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [456a0c3cbf5ce028b9cbac658728c1fee13ad8e2659bfa0c625cd685d711c708] <==
	E0919 22:23:33.115040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:23:33.129278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:23:33.194774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:23:33.337699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0919 22:23:35.055089       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:24:08.346116       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:08.346301       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:08.365410       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-78xs2" node="ha-326307-m02"
	E0919 22:24:08.365600       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="kube-system/kindnet-78xs2"
	E0919 22:24:08.379248       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"kindnet-78xs2\" not found" pod="kube-system/kindnet-78xs2"
	E0919 22:24:10.002296       1 schedule_one.go:975] "Scheduler cache AssumePod failed" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:10.002334       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	I0919 22:24:10.002368       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:31.751287       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-pnj9r" node="ha-326307-m03"
	E0919 22:24:31.751375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-pnj9r"
	E0919 22:24:31.887089       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:31.887576       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 173e48ec-ef56-4824-9f55-a04b199b7943(kube-system/kindnet-qxwpq) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	E0919 22:24:31.887605       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	I0919 22:24:31.888969       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:35.828083       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-nzzlb" node="ha-326307-m03"
	E0919 22:24:35.828187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-nzzlb"
	E0919 22:24:35.839864       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	E0919 22:24:35.839940       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ba4fd407-2e93-4324-ab2d-4f192d79fdf5(kube-system/kindnet-dmxl8) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	E0919 22:24:35.839964       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	I0919 22:24:35.841757       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	
	
	==> kubelet <==
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638035    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-cni-cfg\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638087    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-xtables-lock\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638115    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70be5fcc-7ab6-4eb1-870d-988fee1a01bb-kube-proxy\") pod \"kube-proxy-8kxtv\" (UID: \"70be5fcc-7ab6-4eb1-870d-988fee1a01bb\") " pod="kube-system/kube-proxy-8kxtv"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140870    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64376c4d-1b82-490d-887d-7f628b134014-config-volume\") pod \"coredns-66bc5c9577-wqvzd\" (UID: \"64376c4d-1b82-490d-887d-7f628b134014\") " pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140945    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d073e38-b63e-494d-bda0-3dde372a950b-config-volume\") pod \"coredns-66bc5c9577-9j5pw\" (UID: \"7d073e38-b63e-494d-bda0-3dde372a950b\") " pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140976    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkhk\" (UniqueName: \"kubernetes.io/projected/64376c4d-1b82-490d-887d-7f628b134014-kube-api-access-8tkhk\") pod \"coredns-66bc5c9577-wqvzd\" (UID: \"64376c4d-1b82-490d-887d-7f628b134014\") " pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.141004    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gmbw\" (UniqueName: \"kubernetes.io/projected/7d073e38-b63e-494d-bda0-3dde372a950b-kube-api-access-8gmbw\") pod \"coredns-66bc5c9577-9j5pw\" (UID: \"7d073e38-b63e-494d-bda0-3dde372a950b\") " pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319752    1670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\""
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319858    1670 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\"" pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319884    1670 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\"" pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319966    1670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wqvzd_kube-system(64376c4d-1b82-490d-887d-7f628b134014)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wqvzd_kube-system(64376c4d-1b82-490d-887d-7f628b134014)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\\\": failed to find network info for sandbox \\\"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\\\"\"" pod="kube-system/coredns-66bc5c9577-wqvzd" podUID="64376c4d-1b82-490d-887d-7f628b134014"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332044    1670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\""
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332130    1670 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\"" pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332205    1670 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\"" pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332288    1670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9j5pw_kube-system(7d073e38-b63e-494d-bda0-3dde372a950b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9j5pw_kube-system(7d073e38-b63e-494d-bda0-3dde372a950b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\\\": failed to find network info for sandbox \\\"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\\\"\"" pod="kube-system/coredns-66bc5c9577-9j5pw" podUID="7d073e38-b63e-494d-bda0-3dde372a950b"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.543914    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cafe04c6-2dce-4b93-b6d1-205efc39b360-tmp\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.543969    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47vqf\" (UniqueName: \"kubernetes.io/projected/cafe04c6-2dce-4b93-b6d1-205efc39b360-kube-api-access-47vqf\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.684901    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gxnzs" podStartSLOduration=1.68487896 podStartE2EDuration="1.68487896s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:40.684630982 +0000 UTC m=+6.151051272" watchObservedRunningTime="2025-09-19 22:23:40.68487896 +0000 UTC m=+6.151299251"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.685802    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8kxtv" podStartSLOduration=1.685781067 podStartE2EDuration="1.685781067s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:40.670987608 +0000 UTC m=+6.137407898" watchObservedRunningTime="2025-09-19 22:23:40.685781067 +0000 UTC m=+6.152201360"
	Sep 19 22:23:41 ha-326307 kubelet[1670]: I0919 22:23:41.676063    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.676036489 podStartE2EDuration="1.676036489s" podCreationTimestamp="2025-09-19 22:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:41.675998333 +0000 UTC m=+7.142418624" watchObservedRunningTime="2025-09-19 22:23:41.676036489 +0000 UTC m=+7.142456778"
	Sep 19 22:23:45 ha-326307 kubelet[1670]: I0919 22:23:45.164667    1670 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:23:45 ha-326307 kubelet[1670]: I0919 22:23:45.165981    1670 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:23:52 ha-326307 kubelet[1670]: I0919 22:23:52.703916    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wqvzd" podStartSLOduration=13.703896267 podStartE2EDuration="13.703896267s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:52.703429297 +0000 UTC m=+18.169849612" watchObservedRunningTime="2025-09-19 22:23:52.703896267 +0000 UTC m=+18.170316558"
	Sep 19 22:23:56 ha-326307 kubelet[1670]: I0919 22:23:56.724956    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9j5pw" podStartSLOduration=17.724936721 podStartE2EDuration="17.724936721s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:56.724564031 +0000 UTC m=+22.190984322" watchObservedRunningTime="2025-09-19 22:23:56.724936721 +0000 UTC m=+22.191357012"
	Sep 19 22:25:18 ha-326307 kubelet[1670]: I0919 22:25:18.904730    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt2kb\" (UniqueName: \"kubernetes.io/projected/7533a5f9-7c6d-4476-9e03-eb8abe0aadbc-kube-api-access-rt2kb\") pod \"busybox-7b57f96db7-m8swj\" (UID: \"7533a5f9-7c6d-4476-9e03-eb8abe0aadbc\") " pod="default/busybox-7b57f96db7-m8swj"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-326307 -n ha-326307
helpers_test.go:269: (dbg) Run:  kubectl --context ha-326307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-jdczt
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/CopyFile]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-326307 describe pod busybox-7b57f96db7-jdczt
helpers_test.go:290: (dbg) kubectl --context ha-326307 describe pod busybox-7b57f96db7-jdczt:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-jdczt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ha-326307-m03/192.168.49.4
	Start Time:       Fri, 19 Sep 2025 22:25:18 +0000
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwg8l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xwg8l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                  Age                   From               Message
	  ----     ------                  ----                  ----               -------
	  Warning  FailedScheduling        12m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-jdczt": pod busybox-7b57f96db7-jdczt is already assigned to node "ha-326307-m03"
	  Warning  FailedScheduling        12m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-jdczt": pod busybox-7b57f96db7-jdczt is already assigned to node "ha-326307-m03"
	  Normal   Scheduled               12m                   default-scheduler  Successfully assigned default/busybox-7b57f96db7-jdczt to ha-326307-m03
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f949ef20a496c1ef4510b9586bfdf0aa02ea1ca9948f762b5576ef36acab80c9": failed to find network info for sandbox "f949ef20a496c1ef4510b9586bfdf0aa02ea1ca9948f762b5576ef36acab80c9"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "306b20c8e47aaeb0b6ae068e406157020ceddab45da2b4f2ab7d80c0e47f4391": failed to find network info for sandbox "306b20c8e47aaeb0b6ae068e406157020ceddab45da2b4f2ab7d80c0e47f4391"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e6c50e24733dc1514dd610f1c51f99bc1a57d10929036ae887a87c4b187b9ac1": failed to find network info for sandbox "e6c50e24733dc1514dd610f1c51f99bc1a57d10929036ae887a87c4b187b9ac1"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9d4e7715d1c071862624112264db649229347a018044c9075df60fb9940c8e8a": failed to find network info for sandbox "9d4e7715d1c071862624112264db649229347a018044c9075df60fb9940c8e8a"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d188384b76e1cf43ce05a368351c54023e455d3fd0fddf79dc0d717558b93ee6": failed to find network info for sandbox "d188384b76e1cf43ce05a368351c54023e455d3fd0fddf79dc0d717558b93ee6"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "726dbde7347664ddd373a329867d125e92a7173ca43b01448ce154579a81a0bb": failed to find network info for sandbox "726dbde7347664ddd373a329867d125e92a7173ca43b01448ce154579a81a0bb"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e45f823e38e8e10ed14077a52e1750763b0c366d8a775bbf53f656c10861f185": failed to find network info for sandbox "e45f823e38e8e10ed14077a52e1750763b0c366d8a775bbf53f656c10861f185"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "92ae39dcfd89289f5a5fdc5ae0c23196a91b58214916aead7f95620a2697c009": failed to find network info for sandbox "92ae39dcfd89289f5a5fdc5ae0c23196a91b58214916aead7f95620a2697c009"
	  Warning  FailedCreatePodSandBox  11m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "98a14fefd702a6a8ff4d95d8bebac62053e439ee134d840d76e175ba4e8c45d6": failed to find network info for sandbox "98a14fefd702a6a8ff4d95d8bebac62053e439ee134d840d76e175ba4e8c45d6"
	  Warning  FailedCreatePodSandBox  2m51s (x39 over 11m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "68cb483b1808e73ea325cca055c7a7f1bd2a591a81aa8b2bcb8cb96560fd08b2": failed to find network info for sandbox "68cb483b1808e73ea325cca055c7a7f1bd2a591a81aa8b2bcb8cb96560fd08b2"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (16.35s)
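Every FailedCreatePodSandBox event for busybox-7b57f96db7-jdczt above carries the same containerd error, "failed to find network info for sandbox", which usually means no CNI configuration is visible on ha-326307-m03 even though kindnet is scheduled there. A manual spot check along these lines could confirm it (a sketch only, assuming the docker driver and kindnet's usual config path /etc/cni/net.d; these commands are not part of the recorded run):

    # is a CNI config present on the node that cannot create the sandbox?
    out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m03 -- ls -l /etc/cni/net.d
    # what does the container runtime report for the stuck pod's sandbox?
    out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m03 -- sudo crictl pods --name busybox-7b57f96db7-jdczt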

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (14.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 node stop m02 --alsologtostderr -v 5: (12.021337929s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5: exit status 7 (547.887616ms)

                                                
                                                
-- stdout --
	ha-326307
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-326307-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:38:29.765492   95282 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:38:29.765795   95282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:29.765806   95282 out.go:374] Setting ErrFile to fd 2...
	I0919 22:38:29.765810   95282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:29.765987   95282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:38:29.766179   95282 out.go:368] Setting JSON to false
	I0919 22:38:29.766201   95282 mustload.go:65] Loading cluster: ha-326307
	I0919 22:38:29.766341   95282 notify.go:220] Checking for updates...
	I0919 22:38:29.766734   95282 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:38:29.766764   95282 status.go:174] checking status of ha-326307 ...
	I0919 22:38:29.767418   95282 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:38:29.787055   95282 status.go:371] ha-326307 host status = "Running" (err=<nil>)
	I0919 22:38:29.787111   95282 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:29.787429   95282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:38:29.806532   95282 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:29.807007   95282 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:29.807082   95282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:38:29.827523   95282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:38:29.920429   95282 ssh_runner.go:195] Run: systemctl --version
	I0919 22:38:29.925351   95282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:29.938568   95282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:29.997096   95282 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-19 22:38:29.986873412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:29.997617   95282 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:29.997645   95282 api_server.go:166] Checking apiserver status ...
	I0919 22:38:29.997679   95282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:30.010595   95282 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0919 22:38:30.020922   95282 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:30.020968   95282 ssh_runner.go:195] Run: ls
	I0919 22:38:30.024739   95282 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:30.028954   95282 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:30.028978   95282 status.go:463] ha-326307 apiserver status = Running (err=<nil>)
	I0919 22:38:30.028989   95282 status.go:176] ha-326307 status: &{Name:ha-326307 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:30.029011   95282 status.go:174] checking status of ha-326307-m02 ...
	I0919 22:38:30.029289   95282 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:38:30.047883   95282 status.go:371] ha-326307-m02 host status = "Stopped" (err=<nil>)
	I0919 22:38:30.047911   95282 status.go:384] host is not running, skipping remaining checks
	I0919 22:38:30.047932   95282 status.go:176] ha-326307-m02 status: &{Name:ha-326307-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:30.047963   95282 status.go:174] checking status of ha-326307-m03 ...
	I0919 22:38:30.048286   95282 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:38:30.068032   95282 status.go:371] ha-326307-m03 host status = "Running" (err=<nil>)
	I0919 22:38:30.068055   95282 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:38:30.068334   95282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:38:30.086648   95282 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:38:30.086906   95282 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:30.086945   95282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:38:30.107139   95282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:38:30.200363   95282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:30.212500   95282 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:30.212528   95282 api_server.go:166] Checking apiserver status ...
	I0919 22:38:30.212558   95282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:30.224359   95282 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W0919 22:38:30.234725   95282 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:30.234771   95282 ssh_runner.go:195] Run: ls
	I0919 22:38:30.238438   95282 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:30.242863   95282 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:30.242887   95282 status.go:463] ha-326307-m03 apiserver status = Running (err=<nil>)
	I0919 22:38:30.242896   95282 status.go:176] ha-326307-m03 status: &{Name:ha-326307-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:30.242910   95282 status.go:174] checking status of ha-326307-m04 ...
	I0919 22:38:30.243137   95282 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:38:30.263706   95282 status.go:371] ha-326307-m04 host status = "Stopped" (err=<nil>)
	I0919 22:38:30.263731   95282 status.go:384] host is not running, skipping remaining checks
	I0919 22:38:30.263737   95282 status.go:176] ha-326307-m04 status: &{Name:ha-326307-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
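The trace above is the per-node probe that status.go runs: it reads the apiserver endpoint from the kubeconfig, pgreps for the kube-apiserver process over SSH, attempts the freezer-cgroup lookup (which exits 1 here and is only logged as a warning), and finally fetches /healthz. Below is a minimal, self-contained sketch of that last step against the endpoint reported in the log; the function name probeHealthz is made up for illustration, and TLS verification is skipped only to keep the example short (the real check presents the cluster's client credentials).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz fetches <endpoint>/healthz and reports whether the apiserver
// answered 200 "ok", mirroring the check logged above.
func probeHealthz(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: the integration test trusts the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := probeHealthz("https://192.168.49.254:8443")
	fmt.Println("healthz ok:", ok, "err:", err)
}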
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5": ha-326307
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-326307-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-326307-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-326307-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5": ha-326307
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-326307-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-326307-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-326307-m04
type: Worker
host: Stopped
kubelet: Stopped

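The failures at ha_test.go:380 and ha_test.go:383 come from counting how many nodes report "host: Running" and "kubelet: Running" in the plain-text status shown above; with ha-326307-m02 and ha-326307-m04 stopped, only two of the expected three are running. A rough sketch of that counting, assuming the same line layout; countRunning is a made-up helper, not the test's actual implementation.

package main

import (
	"fmt"
	"strings"
)

// countRunning tallies lines of the form "<field>: Running" in minikube's
// plain-text status output.
func countRunning(status, field string) int {
	n := 0
	for _, line := range strings.Split(status, "\n") {
		if strings.TrimSpace(line) == field+": Running" {
			n++
		}
	}
	return n
}

func main() {
	status := `ha-326307
host: Running
kubelet: Running

ha-326307-m02
host: Stopped
kubelet: Stopped`
	fmt.Println("running hosts:", countRunning(status, "host"))
	fmt.Println("running kubelets:", countRunning(status, "kubelet"))
}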
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-326307
helpers_test.go:243: (dbg) docker inspect ha-326307:

-- stdout --
	[
	    {
	        "Id": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	        "Created": "2025-09-19T22:23:18.619000062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 69921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:23:18.670514121Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hosts",
	        "LogPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3-json.log",
	        "Name": "/ha-326307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-326307:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-326307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	                "LowerDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-326307",
	                "Source": "/var/lib/docker/volumes/ha-326307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-326307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-326307",
	                "name.minikube.sigs.k8s.io": "ha-326307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b9c61cd0152986e2b265b3cf0a7628b1c049e495ce30493b8e54f6b9446115f",
	            "SandboxKey": "/var/run/docker/netns/8b9c61cd0152",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-326307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:80:09:d2:65:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "465af21e2d8d112a34f14f7c3ab89eac4e6e57f582ff8e32b514381f55dd085e",
	                    "EndpointID": "f35735061c65841c2c1ba7f2859db25885582588fa8f2d14e3a015320f6c3fc4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-326307",
	                        "5e0f1fe86b08"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
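The post-mortem dump above is the full docker inspect record for the primary node; the pieces the harness actually keys on are the published host ports under NetworkSettings.Ports (for example 8443/tcp on 127.0.0.1:32791) and the static address 192.168.49.2 on the ha-326307 network. A minimal sketch of pulling just those fields, assuming the docker CLI is on PATH and using the container name from the log; the struct layout below is illustrative only, not minikube's own types.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectResult captures only the fields of `docker inspect` output used here.
type inspectResult struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
		Networks map[string]struct {
			IPAddress string
		}
	}
}

func main() {
	// Inspect the container named in the log above; docker must be on PATH.
	out, err := exec.Command("docker", "inspect", "ha-326307").Output()
	if err != nil {
		panic(err)
	}
	var results []inspectResult
	if err := json.Unmarshal(out, &results); err != nil {
		panic(err)
	}
	for _, r := range results {
		for _, b := range r.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
		}
		fmt.Println("cluster IP:", r.NetworkSettings.Networks["ha-326307"].IPAddress)
	}
}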
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-326307 -n ha-326307
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 logs -n 25: (1.307080107s)
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307-m03.txt │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307:/home/docker/cp-test_ha-326307-m03_ha-326307.txt                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307.txt                                                │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307-m03_ha-326307-m02.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m02 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m02.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp testdata/cp-test.txt ha-326307-m04:/home/docker/cp-test.txt                                                            │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307-m04.txt │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307:/home/docker/cp-test_ha-326307-m04_ha-326307.txt                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307.txt                                                │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m02 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m03:/home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ node    │ ha-326307 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:23:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:23:13.527478   69358 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:23:13.527574   69358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:13.527579   69358 out.go:374] Setting ErrFile to fd 2...
	I0919 22:23:13.527586   69358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:13.527823   69358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:23:13.528355   69358 out.go:368] Setting JSON to false
	I0919 22:23:13.529260   69358 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3938,"bootTime":1758316656,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:23:13.529345   69358 start.go:140] virtualization: kvm guest
	I0919 22:23:13.531661   69358 out.go:179] * [ha-326307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:23:13.533198   69358 notify.go:220] Checking for updates...
	I0919 22:23:13.533231   69358 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:23:13.534827   69358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:23:13.536340   69358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:23:13.537773   69358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:23:13.539372   69358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:23:13.541189   69358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:23:13.542697   69358 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:23:13.568228   69358 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:23:13.568380   69358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:13.622546   69358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:23:13.612893654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:13.622646   69358 docker.go:318] overlay module found
	I0919 22:23:13.624668   69358 out.go:179] * Using the docker driver based on user configuration
	I0919 22:23:13.626116   69358 start.go:304] selected driver: docker
	I0919 22:23:13.626134   69358 start.go:918] validating driver "docker" against <nil>
	I0919 22:23:13.626147   69358 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:23:13.626725   69358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:13.684385   69358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:23:13.672811393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:13.684569   69358 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:23:13.684775   69358 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:23:13.686618   69358 out.go:179] * Using Docker driver with root privileges
	I0919 22:23:13.687924   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:13.688000   69358 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:23:13.688014   69358 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:23:13.688089   69358 start.go:348] cluster config:
	{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m
0s}
	I0919 22:23:13.689601   69358 out.go:179] * Starting "ha-326307" primary control-plane node in "ha-326307" cluster
	I0919 22:23:13.691305   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:23:13.692823   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:23:13.694304   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:13.694378   69358 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 22:23:13.694398   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:23:13.694426   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:23:13.694515   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:23:13.694533   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:23:13.694981   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:13.695014   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json: {Name:mk9e3af266bcfbabd18624d7d22535c6f1841e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:13.716737   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:23:13.716759   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:23:13.716776   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:23:13.716797   69358 start.go:360] acquireMachinesLock for ha-326307: {Name:mk42b79b90944aab63c8b37c2f94e04ca1ebec1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:23:13.716893   69358 start.go:364] duration metric: took 80.537µs to acquireMachinesLock for "ha-326307"
	I0919 22:23:13.716915   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:13.716974   69358 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:23:13.719062   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:23:13.719317   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:23:13.719352   69358 client.go:168] LocalClient.Create starting
	I0919 22:23:13.719447   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:23:13.719502   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:13.719517   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:13.719580   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:23:13.719600   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:13.719610   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:13.719933   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:23:13.737609   69358 cli_runner.go:211] docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:23:13.737699   69358 network_create.go:284] running [docker network inspect ha-326307] to gather additional debugging logs...
	I0919 22:23:13.737725   69358 cli_runner.go:164] Run: docker network inspect ha-326307
	W0919 22:23:13.755400   69358 cli_runner.go:211] docker network inspect ha-326307 returned with exit code 1
	I0919 22:23:13.755437   69358 network_create.go:287] error running [docker network inspect ha-326307]: docker network inspect ha-326307: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-326307 not found
	I0919 22:23:13.755455   69358 network_create.go:289] output of [docker network inspect ha-326307]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-326307 not found
	
	** /stderr **
	I0919 22:23:13.755563   69358 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:13.774541   69358 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018eb270}
	I0919 22:23:13.774578   69358 network_create.go:124] attempt to create docker network ha-326307 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:23:13.774619   69358 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-326307 ha-326307
	I0919 22:23:13.834699   69358 network_create.go:108] docker network ha-326307 192.168.49.0/24 created
	I0919 22:23:13.834730   69358 kic.go:121] calculated static IP "192.168.49.2" for the "ha-326307" container
	I0919 22:23:13.834799   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:23:13.852316   69358 cli_runner.go:164] Run: docker volume create ha-326307 --label name.minikube.sigs.k8s.io=ha-326307 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:23:13.872969   69358 oci.go:103] Successfully created a docker volume ha-326307
	I0919 22:23:13.873115   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307 --entrypoint /usr/bin/test -v ha-326307:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:23:14.277718   69358 oci.go:107] Successfully prepared a docker volume ha-326307
	I0919 22:23:14.277762   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:14.277789   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:23:14.277852   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:23:18.547851   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.269954037s)
	I0919 22:23:18.547886   69358 kic.go:203] duration metric: took 4.270092787s to extract preloaded images to volume ...
	W0919 22:23:18.548002   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:23:18.548044   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:23:18.548091   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:23:18.602395   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307 --name ha-326307 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307 --network ha-326307 --ip 192.168.49.2 --volume ha-326307:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:23:18.902433   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Running}}
	I0919 22:23:18.923488   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:18.945324   69358 cli_runner.go:164] Run: docker exec ha-326307 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:23:18.998198   69358 oci.go:144] the created container "ha-326307" has a running status.
	I0919 22:23:18.998254   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa...
	I0919 22:23:19.305578   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:23:19.305639   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:23:19.338987   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:19.361057   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:23:19.361077   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:23:19.423644   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:19.446710   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:23:19.446815   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.468914   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.469178   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.469194   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:23:19.609654   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:23:19.609685   69358 ubuntu.go:182] provisioning hostname "ha-326307"
	I0919 22:23:19.609806   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.631352   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.631769   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.631790   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307 && echo "ha-326307" | sudo tee /etc/hostname
	I0919 22:23:19.783770   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:23:19.783868   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.802757   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.802967   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.802990   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:23:19.942778   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:23:19.942811   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:23:19.942925   69358 ubuntu.go:190] setting up certificates
	I0919 22:23:19.942949   69358 provision.go:84] configureAuth start
	I0919 22:23:19.943010   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:19.963444   69358 provision.go:143] copyHostCerts
	I0919 22:23:19.963491   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:19.963531   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:23:19.963541   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:19.963629   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:23:19.963778   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:19.963807   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:23:19.963811   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:19.963862   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:23:19.963997   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:19.964030   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:23:19.964040   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:19.964080   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:23:19.964187   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307 san=[127.0.0.1 192.168.49.2 ha-326307 localhost minikube]
	I0919 22:23:20.747311   69358 provision.go:177] copyRemoteCerts
	I0919 22:23:20.747377   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:23:20.747410   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:20.766468   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:20.866991   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:23:20.867057   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:23:20.897799   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:23:20.897858   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:23:20.925953   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:23:20.926026   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:23:20.954845   69358 provision.go:87] duration metric: took 1.011880735s to configureAuth
	I0919 22:23:20.954872   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:23:20.955074   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:20.955089   69358 machine.go:96] duration metric: took 1.508356629s to provisionDockerMachine
	I0919 22:23:20.955096   69358 client.go:171] duration metric: took 7.235738314s to LocalClient.Create
	I0919 22:23:20.955122   69358 start.go:167] duration metric: took 7.235806728s to libmachine.API.Create "ha-326307"
	I0919 22:23:20.955128   69358 start.go:293] postStartSetup for "ha-326307" (driver="docker")
	I0919 22:23:20.955136   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:23:20.955224   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:23:20.955259   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:20.975767   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.077921   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:23:21.081820   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:23:21.081872   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:23:21.081881   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:23:21.081888   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:23:21.081901   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:23:21.081973   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:23:21.082057   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:23:21.082071   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:23:21.082204   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:23:21.092245   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:21.123732   69358 start.go:296] duration metric: took 168.590139ms for postStartSetup
	I0919 22:23:21.124127   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:21.143109   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:21.143414   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:23:21.143466   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.162970   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.258062   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:23:21.263437   69358 start.go:128] duration metric: took 7.546444684s to createHost
	I0919 22:23:21.263491   69358 start.go:83] releasing machines lock for "ha-326307", held for 7.546570423s
	I0919 22:23:21.263561   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:21.282251   69358 ssh_runner.go:195] Run: cat /version.json
	I0919 22:23:21.282309   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.282391   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:23:21.282539   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.302076   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.302858   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.477003   69358 ssh_runner.go:195] Run: systemctl --version
	I0919 22:23:21.481946   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:23:21.486736   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:23:21.519470   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:23:21.519573   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:23:21.549703   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:23:21.549736   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:23:21.549772   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:23:21.549813   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:23:21.563897   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:23:21.577043   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:23:21.577104   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:23:21.591898   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:23:21.607905   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:23:21.677531   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:23:21.749223   69358 docker.go:234] disabling docker service ...
	I0919 22:23:21.749348   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:23:21.771648   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:23:21.786268   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:23:21.864247   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:23:21.930620   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:23:21.943680   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:23:21.963319   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:23:21.977473   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:23:21.989630   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:23:21.989705   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:23:22.001778   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:22.013415   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:23:22.024683   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:22.036042   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:23:22.047238   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:23:22.060239   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:23:22.074324   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:23:22.087081   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:23:22.099883   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:23:22.110348   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:22.180253   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:23:22.295748   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:23:22.295832   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:23:22.300535   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:23:22.300597   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:23:22.304676   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:23:22.344790   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:23:22.344850   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:22.371338   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:22.400934   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
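The sequence of sed edits above boils down to two containerd settings, SystemdCgroup = true (to match the systemd cgroup driver detected on the host) and the pause sandbox image, after which containerd is restarted. A minimal Go sketch of an equivalent in-place substitution follows; it is illustrative only, not minikube's actual code, and the path /etc/containerd/config.toml and values are taken from the commands above.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// Assumed path, taken from the sed commands in the log above.
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
	data = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
		ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	// Equivalent of: sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
		ReplaceAll(data, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}

After a change like this the runtime still has to be restarted (sudo systemctl restart containerd, as in the log) before the new cgroup driver takes effect.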
	I0919 22:23:22.402669   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:22.421952   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:23:22.426523   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:22.442415   69358 kubeadm.go:875] updating cluster {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:23:22.442712   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:22.442823   69358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:23:22.482684   69358 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:23:22.482710   69358 containerd.go:534] Images already preloaded, skipping extraction
	I0919 22:23:22.482762   69358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:23:22.518500   69358 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:23:22.518526   69358 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:23:22.518533   69358 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0919 22:23:22.518616   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:23:22.518668   69358 ssh_runner.go:195] Run: sudo crictl info
	I0919 22:23:22.554956   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:22.554993   69358 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:23:22.555004   69358 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:23:22.555029   69358 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326307 NodeName:ha-326307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:23:22.555176   69358 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-326307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
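The kubeadm/kubelet/kube-proxy configuration rendered above is a multi-document YAML that is written a few lines below to /var/tmp/minikube/kubeadm.yaml.new. A hedged Go sketch for reading one field back out of such a stream, here cgroupDriver, which should agree with the SystemdCgroup setting applied earlier; the gopkg.in/yaml.v3 dependency and the check itself are assumptions for illustration, not part of minikube:

package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Assumed path: the rendered config is scp'd to this location in the log below.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF ends the multi-document stream
		}
		if doc["kind"] == "KubeletConfiguration" {
			// Expect "systemd", matching the containerd SystemdCgroup setting above.
			fmt.Println("cgroupDriver:", doc["cgroupDriver"])
		}
	}
}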
	
	I0919 22:23:22.555209   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:23:22.555273   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:23:22.568901   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:23:22.569038   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
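Because the lsmod probe above found no ip_vs modules, kube-vip is configured for ARP-managed VIP duty on eth0 (address 192.168.49.254) without IPVS-based control-plane load-balancing. A small illustrative Go sketch of the same probe, reading /proc/modules directly instead of shelling out to lsmod (an assumption for illustration, not how minikube does it):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// lsmod is essentially a formatted view of /proc/modules.
	f, err := os.Open("/proc/modules")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "ip_vs") {
			fmt.Println("ip_vs available:", s.Text())
			return
		}
	}
	// Matches the outcome in the log above: no ip_vs, so fall back to ARP-only VIP.
	fmt.Println("ip_vs not loaded; skipping IPVS control-plane load-balancing")
}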
	I0919 22:23:22.569091   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:23:22.580223   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:23:22.580317   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:23:22.591268   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 22:23:22.612688   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:23:22.636770   69358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0919 22:23:22.658657   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:23:22.681384   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:23:22.685531   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:22.698340   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:22.769217   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:23:22.792280   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.2
	I0919 22:23:22.792300   69358 certs.go:194] generating shared ca certs ...
	I0919 22:23:22.792315   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.792509   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:23:22.792553   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:23:22.792563   69358 certs.go:256] generating profile certs ...
	I0919 22:23:22.792630   69358 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:23:22.792643   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt with IP's: []
	I0919 22:23:22.975725   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt ...
	I0919 22:23:22.975759   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt: {Name:mk32bca88dd6748516774b56251f96e4fc38a69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.975973   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key ...
	I0919 22:23:22.975990   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key: {Name:mkc0e836c004e527dbd2787dc00463a0715cf8a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.976108   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226
	I0919 22:23:22.976125   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:23:23.460427   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 ...
	I0919 22:23:23.460460   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226: {Name:mk98859e0e43a6d4b4da591dc89695908954cc81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.460672   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226 ...
	I0919 22:23:23.460693   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226: {Name:mk3473c1668aec72ec5a5598645b70e29415cdd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.460941   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:23:23.461078   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
	I0919 22:23:23.461207   69358 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:23:23.461233   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt with IP's: []
	I0919 22:23:23.489621   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt ...
	I0919 22:23:23.489652   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt: {Name:mk06f3b4cfde33781bd7076ead00f94525257452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.489837   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key ...
	I0919 22:23:23.489860   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key: {Name:mk632a617a99ac85bf5a9b022d1173caf8e7b208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.489978   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:23:23.490003   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:23:23.490018   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:23:23.490034   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:23:23.490051   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:23:23.490069   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:23:23.490087   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:23:23.490100   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:23:23.490185   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:23:23.490228   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:23:23.490238   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:23:23.490273   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:23:23.490304   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:23:23.490333   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:23:23.490390   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:23.490435   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.490455   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.490497   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.491033   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:23:23.517815   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:23:23.544857   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:23:23.571386   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:23:23.600966   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:23:23.629855   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:23:23.657907   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:23:23.685564   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:23:23.713503   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:23:23.745344   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:23:23.774311   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:23:23.807603   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:23:23.832523   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:23:23.839649   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:23:23.851364   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.856325   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.856396   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.864469   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:23:23.876649   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:23:23.888129   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.892889   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.892949   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.901167   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:23:23.912487   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:23:23.924831   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.929289   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.929357   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.937110   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
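The openssl x509 -hash / ln -fs pairs above install each CA into /etc/ssl/certs under its OpenSSL subject-name hash (e.g. b5213941.0) so TLS clients on the node can locate it. A hedged Go sketch of that hash-and-symlink pattern; the paths are taken from the log, and the sketch shells out to openssl for the hash rather than reimplementing it:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	// Path taken from the log above; adjust for other CAs (18210.pem, 182102.pem).
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same command as in the log: prints the subject-name hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of: test -L <link> || ln -fs <pem> <link>
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			log.Fatal(err)
		}
	}
}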
	I0919 22:23:23.948517   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:23:23.952948   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:23:23.953011   69358 kubeadm.go:392] StartCluster: {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:23.953080   69358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 22:23:23.953122   69358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:23:23.991138   69358 cri.go:89] found id: ""
	I0919 22:23:23.991247   69358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:23:24.003111   69358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:23:24.013643   69358 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:23:24.013714   69358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:23:24.024557   69358 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:23:24.024576   69358 kubeadm.go:157] found existing configuration files:
	
	I0919 22:23:24.024633   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:23:24.035252   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:23:24.035322   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:23:24.045590   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:23:24.056529   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:23:24.056590   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:23:24.066716   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:23:24.077570   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:23:24.077653   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:23:24.088177   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:23:24.098372   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:23:24.098426   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:23:24.108265   69358 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:23:24.149643   69358 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:23:24.149730   69358 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:23:24.166048   69358 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:23:24.166117   69358 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:23:24.166172   69358 kubeadm.go:310] OS: Linux
	I0919 22:23:24.166213   69358 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:23:24.166275   69358 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:23:24.166357   69358 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:23:24.166446   69358 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:23:24.166536   69358 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:23:24.166608   69358 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:23:24.166683   69358 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:23:24.166760   69358 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:23:24.230351   69358 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:23:24.230487   69358 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:23:24.230602   69358 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:23:24.238806   69358 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:23:24.243498   69358 out.go:252]   - Generating certificates and keys ...
	I0919 22:23:24.243610   69358 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:23:24.243715   69358 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:23:24.335199   69358 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:23:24.361175   69358 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:23:24.769077   69358 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:23:25.053293   69358 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:23:25.392067   69358 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:23:25.392251   69358 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-326307 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:23:25.629558   69358 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:23:25.629706   69358 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-326307 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:23:26.141828   69358 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:23:26.343650   69358 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:23:26.737207   69358 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:23:26.737292   69358 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:23:27.020543   69358 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:23:27.208963   69358 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:23:27.382044   69358 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:23:27.660395   69358 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:23:27.867964   69358 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:23:27.868475   69358 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:23:27.870857   69358 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:23:27.873408   69358 out.go:252]   - Booting up control plane ...
	I0919 22:23:27.873545   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:23:27.873665   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:23:27.873811   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:23:27.884709   69358 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:23:27.884874   69358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:23:27.892815   69358 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:23:27.893043   69358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:23:27.893108   69358 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:23:27.981591   69358 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:23:27.981772   69358 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:23:29.484085   69358 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501867716s
	I0919 22:23:29.488057   69358 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:23:29.488269   69358 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:23:29.488401   69358 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:23:29.488636   69358 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:23:31.058022   69358 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.569932465s
	I0919 22:23:31.762139   69358 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.27419796s
	I0919 22:23:33.991284   69358 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.503282233s
	I0919 22:23:34.005767   69358 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:23:34.017935   69358 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:23:34.032336   69358 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:23:34.032534   69358 kubeadm.go:310] [mark-control-plane] Marking the node ha-326307 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:23:34.042496   69358 kubeadm.go:310] [bootstrap-token] Using token: ym5hq4.pw1tvtip1io4ljbf
	I0919 22:23:34.044381   69358 out.go:252]   - Configuring RBAC rules ...
	I0919 22:23:34.044558   69358 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:23:34.048649   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:23:34.057509   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:23:34.061297   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:23:34.064926   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:23:34.069534   69358 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:23:34.399239   69358 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:23:34.818126   69358 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:23:35.398001   69358 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:23:35.398907   69358 kubeadm.go:310] 
	I0919 22:23:35.399007   69358 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:23:35.399035   69358 kubeadm.go:310] 
	I0919 22:23:35.399120   69358 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:23:35.399149   69358 kubeadm.go:310] 
	I0919 22:23:35.399207   69358 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:23:35.399301   69358 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:23:35.399350   69358 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:23:35.399356   69358 kubeadm.go:310] 
	I0919 22:23:35.399402   69358 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:23:35.399408   69358 kubeadm.go:310] 
	I0919 22:23:35.399470   69358 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:23:35.399481   69358 kubeadm.go:310] 
	I0919 22:23:35.399554   69358 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:23:35.399644   69358 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:23:35.399706   69358 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:23:35.399712   69358 kubeadm.go:310] 
	I0919 22:23:35.399803   69358 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:23:35.399888   69358 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:23:35.399892   69358 kubeadm.go:310] 
	I0919 22:23:35.399971   69358 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ym5hq4.pw1tvtip1io4ljbf \
	I0919 22:23:35.400068   69358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 \
	I0919 22:23:35.400089   69358 kubeadm.go:310] 	--control-plane 
	I0919 22:23:35.400093   69358 kubeadm.go:310] 
	I0919 22:23:35.400204   69358 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:23:35.400217   69358 kubeadm.go:310] 
	I0919 22:23:35.400285   69358 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ym5hq4.pw1tvtip1io4ljbf \
	I0919 22:23:35.400382   69358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 
	I0919 22:23:35.403119   69358 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:23:35.403274   69358 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
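The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info. A short Go sketch that recomputes it from /var/lib/minikube/certs/ca.crt (the path used elsewhere in this log); treat it as an illustrative check, not tooling that is part of this test run:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm's discovery hash is SHA-256 over the CA's SubjectPublicKeyInfo (DER).
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}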
	I0919 22:23:35.403305   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:35.403317   69358 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:23:35.407302   69358 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:23:35.409983   69358 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:23:35.415011   69358 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:23:35.415039   69358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:23:35.436210   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:23:35.679694   69358 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:23:35.679756   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:35.679779   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307 minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=true
	I0919 22:23:35.787076   69358 ops.go:34] apiserver oom_adj: -16
	I0919 22:23:35.787237   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:36.287327   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:36.787300   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:37.287415   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:37.788066   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:38.287401   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:38.787731   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.288028   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.788301   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.864456   69358 kubeadm.go:1105] duration metric: took 4.184765822s to wait for elevateKubeSystemPrivileges
	I0919 22:23:39.864500   69358 kubeadm.go:394] duration metric: took 15.911493151s to StartCluster
	I0919 22:23:39.864524   69358 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:39.864601   69358 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:23:39.865911   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:39.866255   69358 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:39.866275   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:23:39.866288   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:23:39.866297   69358 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:23:39.866377   69358 addons.go:69] Setting storage-provisioner=true in profile "ha-326307"
	I0919 22:23:39.866398   69358 addons.go:238] Setting addon storage-provisioner=true in "ha-326307"
	I0919 22:23:39.866400   69358 addons.go:69] Setting default-storageclass=true in profile "ha-326307"
	I0919 22:23:39.866428   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:39.866523   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:39.866434   69358 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-326307"
	I0919 22:23:39.866921   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.867012   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.892851   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:23:39.893863   69358 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:23:39.893944   69358 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:23:39.893953   69358 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:23:39.894002   69358 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:23:39.894061   69358 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:23:39.893888   69358 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:23:39.894642   69358 addons.go:238] Setting addon default-storageclass=true in "ha-326307"
	I0919 22:23:39.894691   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:39.895196   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.895724   69358 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:23:39.897293   69358 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:23:39.897315   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:23:39.897386   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:39.923915   69358 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:23:39.923939   69358 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:23:39.924001   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:39.926323   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:39.953300   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:39.968501   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:23:40.065441   69358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:23:40.083647   69358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:23:40.190461   69358 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
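
The "host record injected" message above is produced by the single bash pipeline run at 22:23:39.968501: read the live coredns ConfigMap, splice a `hosts` block for host.minikube.internal into the Corefile ahead of the `forward` plugin, and push the result back with `kubectl replace`. The same pipeline, with paths as in the log (the second `-e` expression that also enables CoreDNS query logging is omitted here):

    KUBECTL=/var/lib/minikube/binaries/v1.34.0/kubectl
    KCFG=/var/lib/minikube/kubeconfig
    sudo $KUBECTL --kubeconfig=$KCFG -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
      | sudo $KUBECTL --kubeconfig=$KCFG replace -f -
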
	I0919 22:23:40.433561   69358 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:23:40.435567   69358 addons.go:514] duration metric: took 569.25898ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:23:40.435633   69358 start.go:246] waiting for cluster config update ...
	I0919 22:23:40.435651   69358 start.go:255] writing updated cluster config ...
	I0919 22:23:40.437510   69358 out.go:203] 
	I0919 22:23:40.439070   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:40.439141   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:40.441238   69358 out.go:179] * Starting "ha-326307-m02" control-plane node in "ha-326307" cluster
	I0919 22:23:40.443382   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:23:40.445749   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:23:40.447079   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:40.447132   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:23:40.447229   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:23:40.447308   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:23:40.447326   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:23:40.447427   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:40.470325   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:23:40.470347   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:23:40.470366   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:23:40.470391   69358 start.go:360] acquireMachinesLock for ha-326307-m02: {Name:mk4919a9b19250804b0f53d01bcd11efaf9a431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:23:40.470518   69358 start.go:364] duration metric: took 88.309µs to acquireMachinesLock for "ha-326307-m02"
	I0919 22:23:40.470552   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:40.470618   69358 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:23:40.473495   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:23:40.473607   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:23:40.473631   69358 client.go:168] LocalClient.Create starting
	I0919 22:23:40.473689   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:23:40.473724   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:40.473734   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:40.473828   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:23:40.473853   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:40.473861   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:40.474095   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:40.493916   69358 network_create.go:77] Found existing network {name:ha-326307 subnet:0xc000ad7620 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:23:40.493972   69358 kic.go:121] calculated static IP "192.168.49.3" for the "ha-326307-m02" container
	I0919 22:23:40.494055   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:23:40.516112   69358 cli_runner.go:164] Run: docker volume create ha-326307-m02 --label name.minikube.sigs.k8s.io=ha-326307-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:23:40.537046   69358 oci.go:103] Successfully created a docker volume ha-326307-m02
	I0919 22:23:40.537137   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m02 --entrypoint /usr/bin/test -v ha-326307-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:23:40.991997   69358 oci.go:107] Successfully prepared a docker volume ha-326307-m02
	I0919 22:23:40.992038   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:40.992061   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:23:40.992121   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:23:45.362629   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.370467998s)
	I0919 22:23:45.362666   69358 kic.go:203] duration metric: took 4.370603938s to extract preloaded images to volume ...
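
The two cli_runner lines above show how the new node's /var is seeded before the container even exists: a throwaway kicbase container mounts the preloaded-images tarball read-only and the freshly created named volume, then untars into it. The pattern, with the image digest and paths taken from the log:

    # throwaway container: preload tarball mounted read-only, node volume mounted at /extractDir
    docker run --rm --entrypoint /usr/bin/tar \
      -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro \
      -v ha-326307-m02:/extractDir \
      gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 \
      -I lz4 -xf /preloaded.tar -C /extractDir
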
	W0919 22:23:45.362777   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:23:45.362811   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:23:45.362846   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:23:45.417833   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307-m02 --name ha-326307-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307-m02 --network ha-326307 --ip 192.168.49.3 --volume ha-326307-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
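
The `docker run` on the previous line is the entire "machine" for m02. The same invocation, reflowed one concern per line and with the minikube bookkeeping labels omitted for brevity (all values are the ones in the log):

    # --privileged plus unconfined seccomp/apparmor: systemd, kubelet and containerd run inside the node container
    # --network ha-326307 --ip 192.168.49.3: static address on the cluster's docker network
    # --volume ha-326307-m02:/var: the volume pre-seeded with images in the previous step
    # --publish 127.0.0.1::PORT: random loopback host ports for apiserver (8443), ssh (22), docker (2376), registry (5000), ingress (32443)
    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run \
      -v /lib/modules:/lib/modules:ro \
      --hostname ha-326307-m02 --name ha-326307-m02 \
      --network ha-326307 --ip 192.168.49.3 \
      --volume ha-326307-m02:/var \
      --memory=3072mb -e container=docker \
      --expose 8443 \
      --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
      --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
      gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
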
	I0919 22:23:45.744363   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Running}}
	I0919 22:23:45.768456   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:45.789293   69358 cli_runner.go:164] Run: docker exec ha-326307-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:23:45.846760   69358 oci.go:144] the created container "ha-326307-m02" has a running status.
	I0919 22:23:45.846794   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa...
	I0919 22:23:46.005236   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:23:46.005288   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:23:46.042640   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:46.067424   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:23:46.067455   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
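
Lines 22:23:45.846 through 22:23:46.067 bootstrap ssh access to the new node: a per-machine keypair is generated under the machines directory, the public key is placed at /home/docker/.ssh/authorized_keys inside the container, and ownership is fixed with a privileged exec. The key generation and copy happen in-process in Go; a rough, purely illustrative shell equivalent:

    MACHINE=/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02
    ssh-keygen -t rsa -N "" -f "$MACHINE/id_rsa"        # per-machine keypair (id_rsa / id_rsa.pub)
    # place the public key inside the container and hand it to the docker user
    docker exec -i ha-326307-m02 sh -c 'mkdir -p /home/docker/.ssh && cat > /home/docker/.ssh/authorized_keys' < "$MACHINE/id_rsa.pub"
    docker exec --privileged ha-326307-m02 chown docker:docker /home/docker/.ssh/authorized_keys
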
	I0919 22:23:46.132729   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:46.155854   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:23:46.155967   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.177181   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.177511   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.177533   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:23:46.320054   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:23:46.320089   69358 ubuntu.go:182] provisioning hostname "ha-326307-m02"
	I0919 22:23:46.320185   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.341740   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.341951   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.341965   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m02 && echo "ha-326307-m02" | sudo tee /etc/hostname
	I0919 22:23:46.497123   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:23:46.497234   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.520214   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.520436   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.520455   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:23:46.659417   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
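
Hostname provisioning above is three ssh round-trips: read what the container currently reports, set the new name persistently, then make sure /etc/hosts resolves it. Condensed into one script (commands exactly as run in the log):

    hostname                                                               # confirm current hostname
    sudo hostname ha-326307-m02 && echo "ha-326307-m02" | sudo tee /etc/hostname
    if ! grep -xq '.*\sha-326307-m02' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m02/g' /etc/hosts
      else
        echo '127.0.1.1 ha-326307-m02' | sudo tee -a /etc/hosts
      fi
    fi
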
	I0919 22:23:46.659458   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:23:46.659492   69358 ubuntu.go:190] setting up certificates
	I0919 22:23:46.659505   69358 provision.go:84] configureAuth start
	I0919 22:23:46.659556   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:46.679498   69358 provision.go:143] copyHostCerts
	I0919 22:23:46.679551   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:46.679598   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:23:46.679605   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:46.679712   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:23:46.679851   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:46.679882   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:23:46.679893   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:46.679947   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:23:46.680043   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:46.680141   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:23:46.680185   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:46.680251   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:23:46.680367   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m02 san=[127.0.0.1 192.168.49.3 ha-326307-m02 localhost minikube]
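
configureAuth copies the host-side CA material and then mints a server certificate for the new machine whose SANs cover every name it may be reached by (127.0.0.1, 192.168.49.3, ha-326307-m02, localhost, minikube). minikube signs this in-process; the following openssl invocation is NOT what minikube runs, only an approximate equivalent for illustration:

    # illustrative only: CSR with the org from the log, then sign with the minikube machine CA
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.ha-326307-m02/CN=minikube"
    openssl x509 -req -in server.csr \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 825 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.3,DNS:ha-326307-m02,DNS:localhost,DNS:minikube')
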
	I0919 22:23:46.869190   69358 provision.go:177] copyRemoteCerts
	I0919 22:23:46.869251   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:23:46.869285   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.888798   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:46.988385   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:23:46.988452   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:23:47.018227   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:23:47.018299   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:23:47.046810   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:23:47.046866   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:23:47.074372   69358 provision.go:87] duration metric: took 414.855982ms to configureAuth
	I0919 22:23:47.074400   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:23:47.074581   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:47.074598   69358 machine.go:96] duration metric: took 918.712366ms to provisionDockerMachine
	I0919 22:23:47.074607   69358 client.go:171] duration metric: took 6.600969352s to LocalClient.Create
	I0919 22:23:47.074631   69358 start.go:167] duration metric: took 6.601023702s to libmachine.API.Create "ha-326307"
	I0919 22:23:47.074642   69358 start.go:293] postStartSetup for "ha-326307-m02" (driver="docker")
	I0919 22:23:47.074650   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:23:47.074721   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:23:47.074767   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.094538   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.195213   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:23:47.199088   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:23:47.199139   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:23:47.199181   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:23:47.199191   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:23:47.199215   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:23:47.199276   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:23:47.199378   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:23:47.199394   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:23:47.199502   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:23:47.209642   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:47.240945   69358 start.go:296] duration metric: took 166.288086ms for postStartSetup
	I0919 22:23:47.241383   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:47.261061   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:47.261460   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:23:47.261513   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.280359   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.374609   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:23:47.379255   69358 start.go:128] duration metric: took 6.908623332s to createHost
	I0919 22:23:47.379283   69358 start.go:83] releasing machines lock for "ha-326307-m02", held for 6.908753842s
	I0919 22:23:47.379346   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:47.400418   69358 out.go:179] * Found network options:
	I0919 22:23:47.401854   69358 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:23:47.403072   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:23:47.403133   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:23:47.403263   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:23:47.403266   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:23:47.403326   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.403332   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.423928   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.424218   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.597529   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:23:47.630263   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:23:47.630334   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:23:47.661706   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
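
The two find/sed invocations above prepare /etc/cni/net.d for the CNI minikube installs: any loopback config gains a "name" field and has its cniVersion pinned to 1.0.0, and pre-existing bridge/podman configs are renamed out of the way so they cannot shadow it. A cleaner-quoted sketch of the same two steps (the log runs them as single find -exec one-liners):

    # 1. patch loopback CNI configs in place (add "name", pin cniVersion)
    for f in /etc/cni/net.d/*loopback.conf*; do
      [ -e "$f" ] || continue
      case "$f" in *.mk_disabled) continue ;; esac
      grep -q '"name"' "$f" || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$f"
      sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$f"
    done
    # 2. disable any pre-existing bridge/podman configs
    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      [ -e "$f" ] || continue
      case "$f" in *.mk_disabled) continue ;; esac
      sudo mv "$f" "$f.mk_disabled"
    done
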
	I0919 22:23:47.661733   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:23:47.661772   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:23:47.661826   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:23:47.675485   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:23:47.687726   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:23:47.687780   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:23:47.701818   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:23:47.717912   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:23:47.789825   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:23:47.863188   69358 docker.go:234] disabling docker service ...
	I0919 22:23:47.863267   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:23:47.881757   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:23:47.893830   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:23:47.963004   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:23:48.034120   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
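
Because this profile uses containerd, the competing runtimes baked into the kicbase image are switched off first; sockets are stopped as well as services so activation cannot bring them back. The sequence from the log, as one script:

    # cri-docker
    sudo systemctl stop -f cri-docker.socket
    sudo systemctl stop -f cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    # docker itself
    sudo systemctl stop -f docker.socket
    sudo systemctl stop -f docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
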
	I0919 22:23:48.046843   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:23:48.065279   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:23:48.078269   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:23:48.089105   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:23:48.089186   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:23:48.099867   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:48.111076   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:23:48.122049   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:48.132648   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:23:48.142263   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:23:48.152876   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:23:48.163459   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:23:48.174096   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:23:48.183483   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:23:48.192780   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:48.261004   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
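
The run of sed one-liners between 22:23:48.065 and 22:23:48.174 rewrites /etc/containerd/config.toml for this profile: pause image pinned to registry.k8s.io/pause:3.10.1, restrict_oom_score_adj turned off, SystemdCgroup = true to match the "systemd" cgroup driver detected on the host, the legacy v1/runc.v1 runtime names mapped to io.containerd.runc.v2, conf_dir pointed at /etc/cni/net.d, unprivileged ports enabled, and finally containerd restarted. The essential edits, condensed:

    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
    sudo sysctl net.bridge.bridge-nf-call-iptables        # verify bridged traffic is visible to iptables
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # kube-proxy needs IPv4 forwarding
    sudo systemctl daemon-reload && sudo systemctl restart containerd
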
	I0919 22:23:48.364434   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:23:48.364508   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:23:48.368726   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:23:48.368792   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:23:48.372683   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:23:48.409110   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:23:48.409200   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:48.433389   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:48.460529   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:23:48.462207   69358 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:23:48.464087   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:48.482217   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:23:48.486620   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:48.498806   69358 mustload.go:65] Loading cluster: ha-326307
	I0919 22:23:48.499032   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:48.499315   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:48.518576   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:48.518850   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.3
	I0919 22:23:48.518866   69358 certs.go:194] generating shared ca certs ...
	I0919 22:23:48.518885   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.519012   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:23:48.519082   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:23:48.519096   69358 certs.go:256] generating profile certs ...
	I0919 22:23:48.519222   69358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:23:48.519259   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4
	I0919 22:23:48.519288   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:23:48.963393   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 ...
	I0919 22:23:48.963428   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4: {Name:mk381f64cc0991e3a6417e9586b9565eb7a8dbf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.963635   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4 ...
	I0919 22:23:48.963660   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4: {Name:mk4dbead0b9c36c7a3635520729a1eb2d4b33f13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.963762   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:23:48.963935   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
	I0919 22:23:48.964103   69358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:23:48.964120   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:23:48.964138   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:23:48.964166   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:23:48.964183   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:23:48.964200   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:23:48.964218   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:23:48.964234   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:23:48.964251   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:23:48.964313   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:23:48.964355   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:23:48.964366   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:23:48.964406   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:23:48.964438   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:23:48.964471   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:23:48.964528   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:48.964570   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:23:48.964592   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:23:48.964612   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:48.964731   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:48.983907   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:49.073692   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:23:49.078819   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:23:49.094234   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:23:49.099593   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:23:49.113663   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:23:49.117744   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:23:49.133048   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:23:49.136861   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:23:49.150734   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:23:49.154901   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:23:49.169388   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:23:49.173566   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:23:49.188070   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:23:49.215594   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:23:49.243561   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:23:49.271624   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:23:49.301814   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:23:49.332556   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:23:49.360723   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:23:49.388872   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:23:49.417316   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:23:49.448722   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:23:49.476877   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:23:49.504914   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:23:49.524969   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:23:49.544942   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:23:49.564506   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:23:49.584887   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:23:49.605725   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:23:49.625552   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:23:49.645811   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:23:49.652062   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:23:49.664544   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.668823   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.668889   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.676892   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:23:49.688737   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:23:49.699741   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.703762   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.703823   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.711311   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:23:49.721987   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:23:49.732874   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.737289   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.737351   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.745312   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
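
The openssl/ln pairs above register each CA bundle with the system trust store: the PEM is linked under /etc/ssl/certs, its subject hash is computed, and a <hash>.0 symlink is created, which is the lookup scheme OpenSSL uses. For the minikube CA this amounts to:

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
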
	I0919 22:23:49.756384   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:23:49.760242   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:23:49.760315   69358 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0919 22:23:49.760415   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:23:49.760438   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:23:49.760476   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:23:49.773427   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:23:49.773499   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
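
Whether the manifest above enables control-plane load balancing depends on the ip_vs kernel modules; the probe at 22:23:49.773 came back empty, so kube-vip is configured for plain ARP fail-over of the VIP 192.168.49.254 only. The probe itself is just:

    # exits non-zero on this host, so minikube skips IPVS-based control-plane load balancing
    sudo sh -c "lsmod | grep ip_vs"
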
	I0919 22:23:49.773549   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:23:49.784237   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:23:49.784306   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:23:49.794534   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:23:49.814529   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:23:49.837846   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:23:49.859421   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:23:49.863859   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:49.876721   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:49.948089   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:23:49.971010   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:49.971327   69358 start.go:317] joinCluster: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:49.971508   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:23:49.971618   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:49.992535   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:50.137695   69358 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:50.137740   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kb90tj.om7zof6htice1y8z --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:24:08.633363   69358 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kb90tj.om7zof6htice1y8z --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.495537277s)
	I0919 22:24:08.633404   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:24:08.849981   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307-m02 minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=false
	I0919 22:24:08.928109   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326307-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:24:09.011507   69358 start.go:319] duration metric: took 19.040175049s to joinCluster
	I0919 22:24:09.011590   69358 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:09.011816   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:09.013756   69358 out.go:179] * Verifying Kubernetes components...
	I0919 22:24:09.015232   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:09.115618   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:09.130578   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:24:09.130645   69358 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:24:09.130869   69358 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m02" to be "Ready" ...
	W0919 22:24:11.134373   69358 node_ready.go:57] node "ha-326307-m02" has "Ready":"False" status (will retry)
	I0919 22:24:11.634655   69358 node_ready.go:49] node "ha-326307-m02" is "Ready"
	I0919 22:24:11.634683   69358 node_ready.go:38] duration metric: took 2.503796185s for node "ha-326307-m02" to be "Ready" ...
	I0919 22:24:11.634697   69358 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:24:11.634751   69358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:24:11.647782   69358 api_server.go:72] duration metric: took 2.636155477s to wait for apiserver process to appear ...
	I0919 22:24:11.647812   69358 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:24:11.647848   69358 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:24:11.652005   69358 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:24:11.652952   69358 api_server.go:141] control plane version: v1.34.0
	I0919 22:24:11.652975   69358 api_server.go:131] duration metric: took 5.15649ms to wait for apiserver health ...
	I0919 22:24:11.652984   69358 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:24:11.657535   69358 system_pods.go:59] 17 kube-system pods found
	I0919 22:24:11.657569   69358 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:11.657577   69358 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:11.657581   69358 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:11.657586   69358 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Pending
	I0919 22:24:11.657591   69358 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:11.657598   69358 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Pending: PodScheduled:SchedulerError (pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed)
	I0919 22:24:11.657604   69358 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:11.657609   69358 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Pending
	I0919 22:24:11.657616   69358 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:11.657621   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Pending
	I0919 22:24:11.657626   69358 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:11.657636   69358 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q8mtj": pod kube-proxy-q8mtj is already assigned to node "ha-326307-m02")
	I0919 22:24:11.657642   69358 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:11.657649   69358 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Pending
	I0919 22:24:11.657654   69358 system_pods.go:61] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:11.657660   69358 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Pending
	I0919 22:24:11.657665   69358 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:11.657673   69358 system_pods.go:74] duration metric: took 4.68298ms to wait for pod list to return data ...
	I0919 22:24:11.657687   69358 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:24:11.660430   69358 default_sa.go:45] found service account: "default"
	I0919 22:24:11.660456   69358 default_sa.go:55] duration metric: took 2.762581ms for default service account to be created ...
	I0919 22:24:11.660467   69358 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:24:11.664515   69358 system_pods.go:86] 17 kube-system pods found
	I0919 22:24:11.664549   69358 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:11.664557   69358 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:11.664563   69358 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:11.664567   69358 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Pending
	I0919 22:24:11.664574   69358 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:11.664583   69358 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Pending: PodScheduled:SchedulerError (pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed)
	I0919 22:24:11.664590   69358 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:11.664594   69358 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Pending
	I0919 22:24:11.664600   69358 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:11.664606   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Pending
	I0919 22:24:11.664615   69358 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:11.664623   69358 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q8mtj": pod kube-proxy-q8mtj is already assigned to node "ha-326307-m02")
	I0919 22:24:11.664629   69358 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:11.664637   69358 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Pending
	I0919 22:24:11.664643   69358 system_pods.go:89] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:11.664649   69358 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Pending
	I0919 22:24:11.664653   69358 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:11.664663   69358 system_pods.go:126] duration metric: took 4.189005ms to wait for k8s-apps to be running ...
	I0919 22:24:11.664676   69358 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:24:11.664734   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:24:11.677679   69358 system_svc.go:56] duration metric: took 12.991783ms WaitForService to wait for kubelet
	I0919 22:24:11.677718   69358 kubeadm.go:578] duration metric: took 2.666095008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:11.677741   69358 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:24:11.681219   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:11.681249   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:11.681276   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:11.681282   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:11.681288   69358 node_conditions.go:105] duration metric: took 3.540774ms to run NodePressure ...
	I0919 22:24:11.681302   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:24:11.681336   69358 start.go:255] writing updated cluster config ...
	I0919 22:24:11.683465   69358 out.go:203] 
	I0919 22:24:11.685336   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:11.685480   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:11.687190   69358 out.go:179] * Starting "ha-326307-m03" control-plane node in "ha-326307" cluster
	I0919 22:24:11.688774   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:24:11.690230   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:11.691529   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:24:11.691564   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:11.691570   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:11.691776   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:11.691792   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:24:11.691940   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:11.714494   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:11.714516   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:11.714538   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:11.714564   69358 start.go:360] acquireMachinesLock for ha-326307-m03: {Name:mk07818636650a6efffb19d787e84a34d6f1dd98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:11.714717   69358 start.go:364] duration metric: took 129.412µs to acquireMachinesLock for "ha-326307-m03"
	I0919 22:24:11.714749   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:11.714883   69358 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:24:11.717146   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:11.717288   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:24:11.717325   69358 client.go:168] LocalClient.Create starting
	I0919 22:24:11.717396   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:24:11.717429   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:11.717444   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:11.717499   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:24:11.717523   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:11.717531   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:11.717757   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:11.736709   69358 network_create.go:77] Found existing network {name:ha-326307 subnet:0xc001c6a9f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:11.736749   69358 kic.go:121] calculated static IP "192.168.49.4" for the "ha-326307-m03" container
	I0919 22:24:11.736838   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:11.757855   69358 cli_runner.go:164] Run: docker volume create ha-326307-m03 --label name.minikube.sigs.k8s.io=ha-326307-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:11.780198   69358 oci.go:103] Successfully created a docker volume ha-326307-m03
	I0919 22:24:11.780287   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m03 --entrypoint /usr/bin/test -v ha-326307-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:12.269719   69358 oci.go:107] Successfully prepared a docker volume ha-326307-m03
	I0919 22:24:12.269772   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:24:12.269795   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:12.269864   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:16.658999   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.389088771s)
	I0919 22:24:16.659030   69358 kic.go:203] duration metric: took 4.389232064s to extract preloaded images to volume ...
	W0919 22:24:16.659114   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:16.659151   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:16.659211   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:16.714324   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307-m03 --name ha-326307-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307-m03 --network ha-326307 --ip 192.168.49.4 --volume ha-326307-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:17.029039   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Running}}
	I0919 22:24:17.050534   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.070017   69358 cli_runner.go:164] Run: docker exec ha-326307-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:17.125252   69358 oci.go:144] the created container "ha-326307-m03" has a running status.
	I0919 22:24:17.125293   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa...
	I0919 22:24:17.618351   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:17.618395   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:17.646956   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.667176   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:17.667203   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:17.713667   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.734276   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:17.734370   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:17.755726   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:17.755941   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:17.755953   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:17.894482   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:24:17.894512   69358 ubuntu.go:182] provisioning hostname "ha-326307-m03"
	I0919 22:24:17.894572   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:17.914204   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:17.914507   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:17.914530   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m03 && echo "ha-326307-m03" | sudo tee /etc/hostname
	I0919 22:24:18.068724   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:24:18.068805   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.088244   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:18.088504   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:18.088525   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:18.227353   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:18.227390   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:24:18.227421   69358 ubuntu.go:190] setting up certificates
	I0919 22:24:18.227433   69358 provision.go:84] configureAuth start
	I0919 22:24:18.227496   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.247948   69358 provision.go:143] copyHostCerts
	I0919 22:24:18.247989   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:24:18.248023   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:24:18.248029   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:24:18.248096   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:24:18.248231   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:24:18.248289   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:24:18.248299   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:24:18.248338   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:24:18.248404   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:24:18.248423   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:24:18.248427   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:24:18.248457   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:24:18.248512   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m03 san=[127.0.0.1 192.168.49.4 ha-326307-m03 localhost minikube]
	I0919 22:24:18.393257   69358 provision.go:177] copyRemoteCerts
	I0919 22:24:18.393319   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:18.393353   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.412748   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.514005   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:18.514092   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:24:18.542657   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:18.542733   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:18.569691   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:18.569759   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:18.596329   69358 provision.go:87] duration metric: took 368.876183ms to configureAuth
	I0919 22:24:18.596357   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:18.596551   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:18.596562   69358 machine.go:96] duration metric: took 862.263986ms to provisionDockerMachine
	I0919 22:24:18.596567   69358 client.go:171] duration metric: took 6.879237415s to LocalClient.Create
	I0919 22:24:18.596586   69358 start.go:167] duration metric: took 6.879300568s to libmachine.API.Create "ha-326307"
	I0919 22:24:18.596594   69358 start.go:293] postStartSetup for "ha-326307-m03" (driver="docker")
	I0919 22:24:18.596602   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:18.596644   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:18.596677   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.615349   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.717907   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:18.722093   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:18.722137   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:18.722150   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:18.722173   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:18.722186   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:24:18.722248   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:24:18.722356   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:24:18.722372   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:24:18.722580   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:18.732899   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:24:18.766453   69358 start.go:296] duration metric: took 169.843532ms for postStartSetup
	I0919 22:24:18.766899   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.786322   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:18.786775   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:18.786833   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.806377   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.901798   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:18.907121   69358 start.go:128] duration metric: took 7.192223106s to createHost
	I0919 22:24:18.907180   69358 start.go:83] releasing machines lock for "ha-326307-m03", held for 7.192445142s
	I0919 22:24:18.907266   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.929545   69358 out.go:179] * Found network options:
	I0919 22:24:18.931020   69358 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:24:18.932299   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932334   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932375   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932396   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:18.932501   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:18.932558   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.932588   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:18.932662   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.952990   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.953400   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:19.131622   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:19.165991   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:19.166079   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:19.197850   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:19.197878   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:24:19.197909   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:19.197960   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:24:19.211538   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:19.223959   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:24:19.224009   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:24:19.239088   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:24:19.254102   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:24:19.328965   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:24:19.406808   69358 docker.go:234] disabling docker service ...
	I0919 22:24:19.406888   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:24:19.425948   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:24:19.438801   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:24:19.510941   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:24:19.581470   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:19.594683   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:19.613666   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:19.627192   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:19.638603   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:19.638668   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:19.649965   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:19.661530   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:19.673111   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:19.684782   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:19.696056   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:19.707630   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:19.719687   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:19.731477   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:19.741738   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:19.751963   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:19.822277   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
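The sed edits above switch containerd's CRI plugin to the systemd cgroup driver, pin the pause 3.10.1 sandbox image, force the runc v2 runtime, and point CNI at /etc/cni/net.d before the daemon is restarted. A quick way to confirm the result on the node, assuming the stock config path used in those commands:

  # Expect: SystemdCgroup = true and sandbox_image = "registry.k8s.io/pause:3.10.1"
  grep -n 'SystemdCgroup' /etc/containerd/config.toml
  grep -n 'sandbox_image' /etc/containerd/config.toml
  sudo systemctl is-active containerd && sudo crictl version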
	I0919 22:24:19.931918   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:24:19.931995   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:24:19.936531   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:24:19.936591   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:24:19.940632   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:19.977944   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:24:19.978013   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:24:20.003290   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:24:20.032714   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:24:20.034190   69358 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:20.035560   69358 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:24:20.036915   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:20.055444   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:20.059762   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:20.072851   69358 mustload.go:65] Loading cluster: ha-326307
	I0919 22:24:20.073081   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:20.073298   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:24:20.091365   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:24:20.091605   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.4
	I0919 22:24:20.091616   69358 certs.go:194] generating shared ca certs ...
	I0919 22:24:20.091629   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.091746   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:24:20.091786   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:24:20.091796   69358 certs.go:256] generating profile certs ...
	I0919 22:24:20.091865   69358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:24:20.091891   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604
	I0919 22:24:20.091905   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:24:20.372898   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 ...
	I0919 22:24:20.372943   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604: {Name:mk9b724916886d4c69140cc45e23ce082460d116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.373186   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604 ...
	I0919 22:24:20.373210   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604: {Name:mkfc0cd42f96faa2f697a81fc7ca671182c3cea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.373311   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:24:20.373471   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
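The apiserver serving certificate regenerated above now carries SANs for every control-plane address plus the VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.3, 192.168.49.4 and 192.168.49.254). One way to double-check the SAN list, using the profile path from the log:

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt \
    | grep -A1 'Subject Alternative Name'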
	I0919 22:24:20.373649   69358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:24:20.373668   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:20.373682   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:20.373692   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:20.373703   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:20.373713   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:20.373723   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:20.373733   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:20.373743   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:20.373795   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:24:20.373823   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:20.373832   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:24:20.373856   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:24:20.373878   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:20.373899   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:20.373936   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:24:20.373962   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:24:20.373976   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:20.373987   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:24:20.374034   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:24:20.394051   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:24:20.484593   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:20.489010   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:20.503471   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:20.507649   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:24:20.522195   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:20.526410   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:20.541840   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:20.546043   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:24:20.560364   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:20.564230   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:20.577547   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:20.581387   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:20.594800   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:20.622991   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:20.651461   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:20.678113   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:20.705292   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:24:20.732489   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:20.762310   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:20.789808   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:20.819251   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:24:20.851010   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:20.879714   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:24:20.908177   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:20.928644   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:24:20.949340   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:20.969391   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:24:20.989837   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:21.011118   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:21.031485   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:24:21.052354   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:24:21.058486   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:24:21.069582   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.074372   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.074440   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.082186   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:21.092957   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:24:21.104085   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.108193   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.108258   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.116078   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:21.127607   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:21.139338   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.143794   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.143848   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.151321   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:21.162759   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:21.166499   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:21.166555   69358 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0919 22:24:21.166642   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
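The kubelet drop-in above pins the node name (--hostname-override=ha-326307-m03) and node IP (--node-ip=192.168.49.4) for m03; the scp steps further down write it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and the unit file to /lib/systemd/system/kubelet.service. A sketch for confirming the drop-in is the one kubelet actually loads:

  systemctl cat kubelet | grep -- '--node-ip'
  systemctl cat kubelet | grep -- '--hostname-override'
  sudo systemctl daemon-reload && systemctl is-enabled kubelet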
	I0919 22:24:21.166677   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:21.166738   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:21.180123   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:21.180202   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
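The manifest above is generated only after the ip_vs probe fails, so kube-vip is configured for ARP leader election (cp_enable, vip_leaderelection) on the VIP 192.168.49.254 rather than IPVS load-balancing. A rough sketch of that decision, assuming lsmod is run locally; the function name is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ipvsAvailable mirrors the log's `sudo sh -c "lsmod | grep ip_vs"` probe.
func ipvsAvailable() bool {
	out, err := exec.Command("lsmod").Output()
	if err != nil {
		return false
	}
	return strings.Contains(string(out), "ip_vs")
}

func main() {
	if ipvsAvailable() {
		fmt.Println("generate kube-vip config with IPVS control-plane load-balancing")
	} else {
		// Same fallback as the log: ARP-based leader election for the VIP only.
		fmt.Println("fall back to ARP leader election for 192.168.49.254")
	}
}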
	I0919 22:24:21.180261   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:21.189900   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:21.189963   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:21.200336   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:24:21.220715   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:21.244525   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:21.268789   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:21.272885   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:21.285764   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:21.362911   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:21.394403   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:24:21.394691   69358 start.go:317] joinCluster: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.394850   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:21.394898   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:24:21.419020   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:24:21.569927   69358 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:21.569980   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uhifqr.okdtfjqzhuoxbb2e --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:24:32.089764   69358 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uhifqr.okdtfjqzhuoxbb2e --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.519762438s)
	I0919 22:24:32.089793   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:24:32.309566   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307-m03 minikube.k8s.io/updated_at=2025_09_19T22_24_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=false
	I0919 22:24:32.391142   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326307-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:24:32.471336   69358 start.go:319] duration metric: took 11.076641052s to joinCluster
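The join recorded above follows the usual two-step flow: mint a token and discovery hash on an existing control-plane node with kubeadm token create --print-join-command --ttl=0, then replay the printed command on m03 with the extra control-plane flags. A hedged local-exec sketch of assembling that command (minikube itself runs both halves over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask an existing control-plane node for a reusable join command.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.TrimSpace(string(out))

	// Append the control-plane flags seen in the log (addresses are examples).
	join += " --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"

	// Executing the assembled command on the joining node is omitted here.
	fmt.Println("would run on m03:", join)
}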
	I0919 22:24:32.471402   69358 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:32.471770   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:32.473461   69358 out.go:179] * Verifying Kubernetes components...
	I0919 22:24:32.475427   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:32.579664   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:32.593786   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:24:32.593856   69358 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:24:32.594084   69358 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m03" to be "Ready" ...
	W0919 22:24:34.597297   69358 node_ready.go:57] node "ha-326307-m03" has "Ready":"False" status (will retry)
	I0919 22:24:35.098269   69358 node_ready.go:49] node "ha-326307-m03" is "Ready"
	I0919 22:24:35.098296   69358 node_ready.go:38] duration metric: took 2.504196997s for node "ha-326307-m03" to be "Ready" ...
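The node_ready wait above boils down to polling the Node object until its Ready condition reports True. A minimal client-go sketch of the same check; the kubeconfig path and polling interval are illustrative, not taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // example path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // same budget the log waits
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-326307-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}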
	I0919 22:24:35.098310   69358 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:24:35.098358   69358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:24:35.111440   69358 api_server.go:72] duration metric: took 2.640014462s to wait for apiserver process to appear ...
	I0919 22:24:35.111465   69358 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:24:35.111483   69358 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:24:35.115724   69358 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:24:35.116810   69358 api_server.go:141] control plane version: v1.34.0
	I0919 22:24:35.116837   69358 api_server.go:131] duration metric: took 5.364462ms to wait for apiserver health ...
	I0919 22:24:35.116849   69358 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:24:35.123343   69358 system_pods.go:59] 27 kube-system pods found
	I0919 22:24:35.123372   69358 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:35.123377   69358 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:35.123380   69358 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:35.123384   69358 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:24:35.123387   69358 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Pending
	I0919 22:24:35.123390   69358 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:35.123393   69358 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:24:35.123400   69358 system_pods.go:61] "kindnet-pnj9r" [14a458fc-0e9d-42e9-9473-f7f2a6f7f571] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-pnj9r": pod kindnet-pnj9r is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123408   69358 system_pods.go:61] "kindnet-qxwpq" [173e48ec-ef56-4824-9f55-a04b199b7943] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qxwpq": pod kindnet-qxwpq is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123416   69358 system_pods.go:61] "kindnet-wcct9" [5472dcae-344b-43fb-84d1-8d0d41852cd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wcct9": pod kindnet-wcct9 is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123427   69358 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:35.123433   69358 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:24:35.123445   69358 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Pending
	I0919 22:24:35.123450   69358 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:35.123454   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running
	I0919 22:24:35.123457   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Pending
	I0919 22:24:35.123461   69358 system_pods.go:61] "kube-proxy-6nmjx" [81414747-6c4e-495e-a28d-cb17f0c0c306] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-6nmjx": pod kube-proxy-6nmjx is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123465   69358 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:35.123469   69358 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:24:35.123472   69358 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ws89d": pod kube-proxy-ws89d is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123477   69358 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:35.123481   69358 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running
	I0919 22:24:35.123487   69358 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Pending
	I0919 22:24:35.123489   69358 system_pods.go:61] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:35.123492   69358 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:24:35.123496   69358 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Pending
	I0919 22:24:35.123503   69358 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:35.123511   69358 system_pods.go:74] duration metric: took 6.65469ms to wait for pod list to return data ...
	I0919 22:24:35.123525   69358 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:24:35.126592   69358 default_sa.go:45] found service account: "default"
	I0919 22:24:35.126616   69358 default_sa.go:55] duration metric: took 3.083846ms for default service account to be created ...
	I0919 22:24:35.126627   69358 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:24:35.131895   69358 system_pods.go:86] 27 kube-system pods found
	I0919 22:24:35.131928   69358 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:35.131936   69358 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:35.131941   69358 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:35.131946   69358 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:24:35.131950   69358 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Pending
	I0919 22:24:35.131954   69358 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:35.131959   69358 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:24:35.131968   69358 system_pods.go:89] "kindnet-pnj9r" [14a458fc-0e9d-42e9-9473-f7f2a6f7f571] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-pnj9r": pod kindnet-pnj9r is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131975   69358 system_pods.go:89] "kindnet-qxwpq" [173e48ec-ef56-4824-9f55-a04b199b7943] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qxwpq": pod kindnet-qxwpq is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131986   69358 system_pods.go:89] "kindnet-wcct9" [5472dcae-344b-43fb-84d1-8d0d41852cd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wcct9": pod kindnet-wcct9 is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131993   69358 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:35.132003   69358 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:24:35.132009   69358 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Pending
	I0919 22:24:35.132015   69358 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:35.132022   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running
	I0919 22:24:35.132028   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Pending
	I0919 22:24:35.132035   69358 system_pods.go:89] "kube-proxy-6nmjx" [81414747-6c4e-495e-a28d-cb17f0c0c306] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-6nmjx": pod kube-proxy-6nmjx is already assigned to node "ha-326307-m03")
	I0919 22:24:35.132044   69358 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:35.132050   69358 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:24:35.132057   69358 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ws89d": pod kube-proxy-ws89d is already assigned to node "ha-326307-m03")
	I0919 22:24:35.132067   69358 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:35.132076   69358 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running
	I0919 22:24:35.132082   69358 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Pending
	I0919 22:24:35.132090   69358 system_pods.go:89] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:35.132096   69358 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:24:35.132101   69358 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Pending
	I0919 22:24:35.132107   69358 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:35.132117   69358 system_pods.go:126] duration metric: took 5.483041ms to wait for k8s-apps to be running ...
	I0919 22:24:35.132130   69358 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:24:35.132201   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:24:35.145901   69358 system_svc.go:56] duration metric: took 13.762213ms WaitForService to wait for kubelet
	I0919 22:24:35.145934   69358 kubeadm.go:578] duration metric: took 2.67451015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:35.145953   69358 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:24:35.149091   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149114   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149122   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149126   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149129   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149133   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149137   69358 node_conditions.go:105] duration metric: took 3.180117ms to run NodePressure ...
	I0919 22:24:35.149147   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:24:35.149187   69358 start.go:255] writing updated cluster config ...
	I0919 22:24:35.149520   69358 ssh_runner.go:195] Run: rm -f paused
	I0919 22:24:35.153920   69358 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:24:35.154452   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:24:35.158459   69358 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9j5pw" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.164361   69358 pod_ready.go:94] pod "coredns-66bc5c9577-9j5pw" is "Ready"
	I0919 22:24:35.164388   69358 pod_ready.go:86] duration metric: took 5.90604ms for pod "coredns-66bc5c9577-9j5pw" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.164396   69358 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wqvzd" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.170275   69358 pod_ready.go:94] pod "coredns-66bc5c9577-wqvzd" is "Ready"
	I0919 22:24:35.170305   69358 pod_ready.go:86] duration metric: took 5.903438ms for pod "coredns-66bc5c9577-wqvzd" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.221651   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.227692   69358 pod_ready.go:94] pod "etcd-ha-326307" is "Ready"
	I0919 22:24:35.227721   69358 pod_ready.go:86] duration metric: took 6.035355ms for pod "etcd-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.227738   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.234705   69358 pod_ready.go:94] pod "etcd-ha-326307-m02" is "Ready"
	I0919 22:24:35.234755   69358 pod_ready.go:86] duration metric: took 6.991962ms for pod "etcd-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.234769   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.355285   69358 request.go:683] "Waited before sending request" delay="120.371513ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326307-m03"
	I0919 22:24:35.555444   69358 request.go:683] "Waited before sending request" delay="196.344855ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:35.955374   69358 request.go:683] "Waited before sending request" delay="196.276117ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:35.958866   69358 pod_ready.go:94] pod "etcd-ha-326307-m03" is "Ready"
	I0919 22:24:35.958897   69358 pod_ready.go:86] duration metric: took 724.121102ms for pod "etcd-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.155371   69358 request.go:683] "Waited before sending request" delay="196.353052ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:24:36.158952   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.355354   69358 request.go:683] "Waited before sending request" delay="196.272183ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307"
	I0919 22:24:36.555231   69358 request.go:683] "Waited before sending request" delay="196.389456ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307"
	I0919 22:24:36.558900   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307" is "Ready"
	I0919 22:24:36.558927   69358 pod_ready.go:86] duration metric: took 399.940435ms for pod "kube-apiserver-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.558936   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.755357   69358 request.go:683] "Waited before sending request" delay="196.333509ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307-m02"
	I0919 22:24:36.955622   69358 request.go:683] "Waited before sending request" delay="196.371107ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:36.958850   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307-m02" is "Ready"
	I0919 22:24:36.958881   69358 pod_ready.go:86] duration metric: took 399.937855ms for pod "kube-apiserver-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.958892   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.155391   69358 request.go:683] "Waited before sending request" delay="196.40338ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307-m03"
	I0919 22:24:37.355336   69358 request.go:683] "Waited before sending request" delay="196.255836ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:37.358527   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307-m03" is "Ready"
	I0919 22:24:37.358558   69358 pod_ready.go:86] duration metric: took 399.659411ms for pod "kube-apiserver-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.555013   69358 request.go:683] "Waited before sending request" delay="196.298446ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0919 22:24:37.559362   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.755832   69358 request.go:683] "Waited before sending request" delay="196.350309ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307"
	I0919 22:24:37.954837   69358 request.go:683] "Waited before sending request" delay="195.286624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307"
	I0919 22:24:37.958236   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307" is "Ready"
	I0919 22:24:37.958266   69358 pod_ready.go:86] duration metric: took 398.878465ms for pod "kube-controller-manager-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.958274   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.155758   69358 request.go:683] "Waited before sending request" delay="197.394867ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m02"
	I0919 22:24:38.355929   69358 request.go:683] "Waited before sending request" delay="196.396129ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:38.359268   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307-m02" is "Ready"
	I0919 22:24:38.359292   69358 pod_ready.go:86] duration metric: took 401.013168ms for pod "kube-controller-manager-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.359301   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.555606   69358 request.go:683] "Waited before sending request" delay="196.234039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m03"
	I0919 22:24:38.755574   69358 request.go:683] "Waited before sending request" delay="196.387697ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:38.955366   69358 request.go:683] "Waited before sending request" delay="95.227976ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m03"
	I0919 22:24:39.154881   69358 request.go:683] "Waited before sending request" delay="196.301821ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:39.555649   69358 request.go:683] "Waited before sending request" delay="192.377634ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:39.955251   69358 request.go:683] "Waited before sending request" delay="92.286577ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	W0919 22:24:40.366591   69358 pod_ready.go:104] pod "kube-controller-manager-ha-326307-m03" is not "Ready", error: <nil>
	W0919 22:24:42.367386   69358 pod_ready.go:104] pod "kube-controller-manager-ha-326307-m03" is not "Ready", error: <nil>
	I0919 22:24:43.367824   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307-m03" is "Ready"
	I0919 22:24:43.367860   69358 pod_ready.go:86] duration metric: took 5.00855284s for pod "kube-controller-manager-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.371145   69358 pod_ready.go:83] waiting for pod "kube-proxy-8kxtv" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.376946   69358 pod_ready.go:94] pod "kube-proxy-8kxtv" is "Ready"
	I0919 22:24:43.376975   69358 pod_ready.go:86] duration metric: took 5.786362ms for pod "kube-proxy-8kxtv" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.376985   69358 pod_ready.go:83] waiting for pod "kube-proxy-q8mtj" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.555396   69358 request.go:683] "Waited before sending request" delay="178.323112ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8mtj"
	I0919 22:24:43.755331   69358 request.go:683] "Waited before sending request" delay="196.35612ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:43.758666   69358 pod_ready.go:94] pod "kube-proxy-q8mtj" is "Ready"
	I0919 22:24:43.758695   69358 pod_ready.go:86] duration metric: took 381.70368ms for pod "kube-proxy-q8mtj" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.758704   69358 pod_ready.go:83] waiting for pod "kube-proxy-ws89d" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.955265   69358 request.go:683] "Waited before sending request" delay="196.399278ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ws89d"
	I0919 22:24:44.155007   69358 request.go:683] "Waited before sending request" delay="196.303687ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:44.354881   69358 request.go:683] "Waited before sending request" delay="95.2124ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ws89d"
	I0919 22:24:44.555609   69358 request.go:683] "Waited before sending request" delay="197.246504ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:44.955613   69358 request.go:683] "Waited before sending request" delay="192.471154ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:45.355390   69358 request.go:683] "Waited before sending request" delay="92.281537ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	W0919 22:24:45.765195   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:48.265294   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:50.765471   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:53.265410   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:55.265474   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:57.765267   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:59.765483   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:02.266617   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:04.766256   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:07.265177   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:09.265694   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:11.765032   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:13.765313   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:15.766278   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	I0919 22:25:17.764644   69358 pod_ready.go:94] pod "kube-proxy-ws89d" is "Ready"
	I0919 22:25:17.764670   69358 pod_ready.go:86] duration metric: took 34.005951783s for pod "kube-proxy-ws89d" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.767738   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.772985   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307" is "Ready"
	I0919 22:25:17.773015   69358 pod_ready.go:86] duration metric: took 5.246042ms for pod "kube-scheduler-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.773023   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.778916   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307-m02" is "Ready"
	I0919 22:25:17.778942   69358 pod_ready.go:86] duration metric: took 5.914033ms for pod "kube-scheduler-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.778951   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.784122   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307-m03" is "Ready"
	I0919 22:25:17.784165   69358 pod_ready.go:86] duration metric: took 5.193982ms for pod "kube-scheduler-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.784183   69358 pod_ready.go:40] duration metric: took 42.630226972s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
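The extra wait that finishes above lists kube-system pods by each component label and blocks until every match reports the PodReady condition. A small client-go sketch of one pass over those selectors; the kubeconfig path is illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // example path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The same label selectors the log waits on, one List call per selector.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for i := range pods.Items {
			p := &pods.Items[i]
			fmt.Printf("%-45s ready=%v\n", p.Name, podReady(p))
		}
	}
}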
	I0919 22:25:17.833559   69358 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 22:25:17.835536   69358 out.go:179] * Done! kubectl is now configured to use "ha-326307" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7791f71e5d5a5       8c811b4aec35f       13 minutes ago      Running             busybox                   0                   b5e0c0fffea25       busybox-7b57f96db7-m8swj
	ca68bbc020e20       52546a367cc9e       14 minutes ago      Running             coredns                   0                   132023f334782       coredns-66bc5c9577-9j5pw
	1f618dc8f0392       52546a367cc9e       14 minutes ago      Running             coredns                   0                   a5ac32b4949ab       coredns-66bc5c9577-wqvzd
	f52d2d9f5881b       6e38f40d628db       14 minutes ago      Running             storage-provisioner       0                   7b77cca917bf4       storage-provisioner
	365cc00c2e009       409467f978b4a       14 minutes ago      Running             kindnet-cni               0                   96e027ec2b5fb       kindnet-gxnzs
	bd9e41958ffbb       df0860106674d       14 minutes ago      Running             kube-proxy                0                   06da62af16945       kube-proxy-8kxtv
	c6c963d9a0cae       765655ea60781       14 minutes ago      Running             kube-vip                  0                   5717652da0ef4       kube-vip-ha-326307
	456a0c3cbf5ce       46169d968e920       15 minutes ago      Running             kube-scheduler            0                   f02b9e82ff9b1       kube-scheduler-ha-326307
	05ab0247624a7       a0af72f2ec6d6       15 minutes ago      Running             kube-controller-manager   0                   6026f58e8c23a       kube-controller-manager-ha-326307
	e5c59a6abe977       5f1f5298c888d       15 minutes ago      Running             etcd                      0                   5f89382a468ad       etcd-ha-326307
	e80d65e3c7c18       90550c43ad2bc       15 minutes ago      Running             kube-apiserver            0                   3813626701bd1       kube-apiserver-ha-326307
	
	
	==> containerd <==
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.754439323Z" level=info msg="CreateContainer within sandbox \"a5ac32b4949abcc8c1007cd2947e92633d80d759aeaf0e7d6b490f2610f81170\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.768027085Z" level=info msg="CreateContainer within sandbox \"a5ac32b4949abcc8c1007cd2947e92633d80d759aeaf0e7d6b490f2610f81170\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\""
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.768844132Z" level=info msg="StartContainer for \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\""
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.836885904Z" level=info msg="StartContainer for \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\" returns successfully"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.632881043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9j5pw,Uid:7d073e38-b63e-494d-bda0-3dde372a950b,Namespace:kube-system,Attempt:0,}"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.759782586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9j5pw,Uid:7d073e38-b63e-494d-bda0-3dde372a950b,Namespace:kube-system,Attempt:0,} returns sandbox id \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.765750080Z" level=info msg="CreateContainer within sandbox \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.779792584Z" level=info msg="CreateContainer within sandbox \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.780572301Z" level=info msg="StartContainer for \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.854015268Z" level=info msg="StartContainer for \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\" returns successfully"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.151709073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-m8swj,Uid:7533a5f9-7c6d-4476-9e03-eb8abe0aadbc,Namespace:default,Attempt:0,}"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.267660233Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.268098400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-m8swj,Uid:7533a5f9-7c6d-4476-9e03-eb8abe0aadbc,Namespace:default,Attempt:0,} returns sandbox id \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\""
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.270196453Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.412014033Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.413088793Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.414707234Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.417602556Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.418335313Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 2.148090964s"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.418383876Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.423388311Z" level=info msg="CreateContainer within sandbox \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.442455841Z" level=info msg="CreateContainer within sandbox \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.443119612Z" level=info msg="StartContainer for \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.497884940Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.500641712Z" level=info msg="StartContainer for \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\" returns successfully"
	
	
	==> coredns [1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54337 - 24572 "HINFO IN 5143313645322175939.5313042790825403134. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069732464s
	[INFO] 10.244.0.4:35490 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000326279s
	[INFO] 10.244.0.4:55330 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.014239882s
	[INFO] 10.244.1.2:39628 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210602s
	[INFO] 10.244.1.2:46891 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.001261026s
	[INFO] 10.244.1.2:43124 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.00098216s
	[INFO] 10.244.1.2:49555 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.00013424s
	[INFO] 10.244.0.4:40362 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00024135s
	[INFO] 10.244.0.4:45629 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168694s
	[INFO] 10.244.1.2:52354 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189457s
	[INFO] 10.244.1.2:43857 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161715s
	[INFO] 10.244.1.2:51922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145764s
	[INFO] 10.244.1.2:57320 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009888497s
	[INFO] 10.244.1.2:49841 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169285s
	[INFO] 10.244.0.4:51548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159656s
	[INFO] 10.244.0.4:48681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110507s
	[INFO] 10.244.1.2:52993 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137337s
	[INFO] 10.244.0.4:59855 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113524s
	[INFO] 10.244.1.2:56284 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188608s
	[INFO] 10.244.1.2:58675 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149085s
	[INFO] 10.244.1.2:38911 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131505s
	
	
	==> coredns [ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93] <==
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49588 - 50300 "HINFO IN 9047056621409016881.2982736294753326061. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063128768s
	[INFO] 10.244.0.4:39328 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.014249598s
	[INFO] 10.244.0.4:59759 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.013332957s
	[INFO] 10.244.0.4:50336 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.008865788s
	[INFO] 10.244.1.2:42753 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000158745s
	[INFO] 10.244.0.4:52334 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261159s
	[INFO] 10.244.0.4:43558 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010645205s
	[INFO] 10.244.0.4:51059 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154122s
	[INFO] 10.244.0.4:46147 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012594143s
	[INFO] 10.244.0.4:39163 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143755s
	[INFO] 10.244.0.4:57061 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014731s
	[INFO] 10.244.1.2:59502 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000129746s
	[INFO] 10.244.1.2:49570 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172915s
	[INFO] 10.244.1.2:48519 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000175653s
	[INFO] 10.244.0.4:50569 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000326714s
	[INFO] 10.244.0.4:45465 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000234038s
	[INFO] 10.244.1.2:52569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176154s
	[INFO] 10.244.1.2:36719 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000205481s
	[INFO] 10.244.1.2:58705 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195468s
	[INFO] 10.244.0.4:38035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216169s
	[INFO] 10.244.0.4:52287 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129599s
	[INFO] 10.244.0.4:37285 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186803s
	[INFO] 10.244.1.2:39163 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165716s
	
	
	==> describe nodes <==
	Name:               ha-326307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:23:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:38:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-326307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 2616418f44a84ee78b49dce19e95d1fb
	  System UUID:                9c3f30ed-68b2-4a1c-af95-9031ae210a78
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-m8swj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-9j5pw             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 coredns-66bc5c9577-wqvzd             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 etcd-ha-326307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kindnet-gxnzs                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-326307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-326307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-8kxtv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-326307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-326307                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	
	
	Name:               ha-326307-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:38:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:02 +0000   Fri, 19 Sep 2025 22:24:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-326307-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 e4f3b60b3b464269bc193e23d4361613
	  System UUID:                8095cd89-f43b-4d8a-adef-b40d6aaa7ad2
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tfpvf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-326307-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kindnet-mk6pv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-326307-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-326307-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-q8mtj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-326307-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-326307-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        14m   kube-proxy       
	  Normal  RegisteredNode  14m   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode  14m   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode  14m   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	
	
	Name:               ha-326307-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:38:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-326307-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 1434e19b2a274233a619428a76d99322
	  System UUID:                5814a8d4-c435-490f-8e5e-a8b038e01be7
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-jdczt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-326307-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-dmxl8                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-326307-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-326307-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-ws89d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-326307-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-326307-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  14m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
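All three control-plane nodes above report Ready with identical capacity, and each carries its expected PodCIDR. A quick way to pull the same allocatable and PodCIDR figures without the full describe output, using plain kubectl against this profile's context, would be:

  kubectl --context ha-326307 get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory,PODCIDR:.spec.podCIDR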
	
	
	==> dmesg <==
	[Sep19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001853] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001006] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.090013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.467231] i8042: Warning: Keylock active
	[  +0.009747] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004210] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001075] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000901] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000908] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001042] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001465] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.534799] block sda: the capability attribute has been deprecated.
	[  +0.099617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027269] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.089616] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [e5c59a6abe97751de42afd27010936d1c3a401fad6cd730e75a1692a895b4fbc] <==
	{"level":"warn","ts":"2025-09-19T22:24:25.447984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:32950","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:24:25.491427Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(6130034673728934350 12593026477526642892 16449250771884659557)"}
	{"level":"info","ts":"2025-09-19T22:24:25.491593Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:24:25.491634Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:24:25.493734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:32956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:25.530775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:32980","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:24:25.607668Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"e4477a6cd7815365","bytes":946167,"size":"946 kB","took":"30.009579431s"}
	{"level":"info","ts":"2025-09-19T22:24:29.797825Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:31.923615Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:35.871798Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:53.749925Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:55.314881Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"5512420eb470d1ce","bytes":1356311,"size":"1.4 MB","took":"30.015547589s"}
	{"level":"info","ts":"2025-09-19T22:33:30.750666Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1558}
	{"level":"info","ts":"2025-09-19T22:33:30.775074Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1558,"took":"23.935678ms","hash":623549535,"current-db-size-bytes":4292608,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":2306048,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-09-19T22:33:30.775132Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":623549535,"revision":1558,"compact-revision":-1}
	{"level":"info","ts":"2025-09-19T22:37:33.574674Z","caller":"traceutil/trace.go:172","msg":"trace[1629775233] transaction","detail":"{read_only:false; response_revision:2889; number_of_response:1; }","duration":"112.632235ms","start":"2025-09-19T22:37:33.462006Z","end":"2025-09-19T22:37:33.574639Z","steps":["trace[1629775233] 'process raft request'  (duration: 112.400333ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:37:33.947726Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.776182ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040082596420208 > lease_revoke:<id:51ce99641422bfa2>","response":"size:29"}
	{"level":"info","ts":"2025-09-19T22:37:33.947978Z","caller":"traceutil/trace.go:172","msg":"trace[2038413] transaction","detail":"{read_only:false; response_revision:2890; number_of_response:1; }","duration":"121.321226ms","start":"2025-09-19T22:37:33.826642Z","end":"2025-09-19T22:37:33.947963Z","steps":["trace[2038413] 'process raft request'  (duration: 121.201718ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:38:29.307140Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","error":"unexpected EOF"}
	{"level":"warn","ts":"2025-09-19T22:38:29.307363Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"e4477a6cd7815365","error":"failed to read e4477a6cd7815365 on stream MsgApp v2 (unexpected EOF)"}
	{"level":"warn","ts":"2025-09-19T22:38:29.307104Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","error":"unexpected EOF"}
	{"level":"warn","ts":"2025-09-19T22:38:29.340532Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365"}
	{"level":"info","ts":"2025-09-19T22:38:30.757614Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2300}
	{"level":"info","ts":"2025-09-19T22:38:30.775597Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2300,"took":"17.403191ms","hash":1032086589,"current-db-size-bytes":4292608,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":1912832,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-19T22:38:30.775646Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1032086589,"revision":2300,"compact-revision":1558}
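The last warnings here show member aec36adc501070cc losing its raft streams to a peer at 22:38:29, which is consistent with a secondary control-plane node being stopped (the post-mortem further down in this dump is for StopSecondaryNode). A hedged way to confirm remaining member health from the surviving control plane, with the cert paths assumed from minikube's usual /var/lib/minikube/certs layout rather than taken from this log, would be:

  kubectl --context ha-326307 -n kube-system exec etcd-ha-326307 -- etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
    --cert=/var/lib/minikube/certs/etcd/server.crt \
    --key=/var/lib/minikube/certs/etcd/server.key \
    member list -w table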
	
	
	==> kernel <==
	 22:38:31 up  1:20,  0 users,  load average: 1.37, 0.86, 0.80
	Linux ha-326307 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [365cc00c2e009eeed7e71d1202a4d406c12f0d9faee38762ba691eb2d7c71f89] <==
	I0919 22:37:50.999094       1 main.go:301] handling current node
	I0919 22:38:00.998283       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:38:00.998315       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:38:00.998535       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:38:00.998550       1 main.go:301] handling current node
	I0919 22:38:00.998563       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:38:00.998569       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:38:10.990811       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:38:10.990869       1 main.go:301] handling current node
	I0919 22:38:10.990889       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:38:10.990896       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:38:10.991255       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:38:10.991276       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:38:20.990331       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:38:20.990440       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:38:20.990672       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:38:20.990688       1 main.go:301] handling current node
	I0919 22:38:20.990700       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:38:20.990705       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:38:30.996272       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:38:30.996311       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:38:30.996525       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:38:30.996542       1 main.go:301] handling current node
	I0919 22:38:30.996557       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:38:30.996563       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e80d65e3c7c18da87f7fa003e39382f5a4285ba4782fc295197421c6b882a161] <==
	I0919 22:32:22.110278       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:33:31.733595       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:33:36.316232       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:33:41.440724       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:43.430235       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:04.843923       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:47.576277       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:36:07.778568       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:37:07.288814       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:37:22.531524       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43412: use of closed network connection
	E0919 22:37:22.776721       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43434: use of closed network connection
	E0919 22:37:22.970082       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43448: use of closed network connection
	E0919 22:37:23.110093       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43464: use of closed network connection
	E0919 22:37:23.308629       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43484: use of closed network connection
	E0919 22:37:23.494833       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43500: use of closed network connection
	E0919 22:37:23.634448       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43520: use of closed network connection
	E0919 22:37:23.803885       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43532: use of closed network connection
	E0919 22:37:23.968210       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43546: use of closed network connection
	E0919 22:37:26.548300       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43614: use of closed network connection
	E0919 22:37:26.721861       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43630: use of closed network connection
	E0919 22:37:26.901556       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43648: use of closed network connection
	E0919 22:37:27.077249       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43672: use of closed network connection
	E0919 22:37:27.253310       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43700: use of closed network connection
	I0919 22:37:36.706481       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:38:20.868281       1 stats.go:136] "Error getting keys" err="empty key: \"\""
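The repeated "use of closed network connection" reads on 192.168.49.254:8443 are client connections dropping against what appears to be the HA virtual IP (kube-vip pods are listed on each control-plane node above); the apiserver itself keeps serving throughout. A simple spot-check of apiserver readiness through the same context would be:

  kubectl --context ha-326307 get --raw='/readyz?verbose'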
	
	
	==> kube-controller-manager [05ab0247624a7f8ffa6bc948e3abc3adc49911c297291eb0a6dd42e3df39f4cd] <==
	I0919 22:23:38.744532       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:23:38.744726       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0919 22:23:38.744739       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0919 22:23:38.744729       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:23:38.744737       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:23:38.744759       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:23:38.745195       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 22:23:38.745255       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:23:38.746448       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:23:38.748706       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 22:23:38.750017       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.750086       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:23:38.751270       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 22:23:38.760899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 22:23:38.760926       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 22:23:38.760971       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 22:23:38.765332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.771790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:08.307746       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m02\" does not exist"
	I0919 22:24:08.319829       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:24:08.699971       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m02"
	E0919 22:24:31.036531       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8ztpb failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8ztpb\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:24:31.706808       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m03\" does not exist"
	I0919 22:24:31.736561       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:24:33.715916       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m03"
	
	
	==> kube-proxy [bd9e41958ffbbb27dab3d180a56fb27df36a1a1896db3a6f322a8aabcda57677] <==
	I0919 22:23:40.183862       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:23:40.251957       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:23:40.353105       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:23:40.353291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:23:40.353503       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:23:40.383440       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:23:40.383522       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:23:40.391534       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:23:40.391999       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:23:40.392045       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:23:40.394189       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:23:40.394304       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:23:40.394470       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:23:40.394480       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:23:40.394266       1 config.go:200] "Starting service config controller"
	I0919 22:23:40.394506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:23:40.394279       1 config.go:309] "Starting node config controller"
	I0919 22:23:40.394533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:23:40.394540       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:23:40.494617       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:23:40.494643       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:23:40.494649       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [456a0c3cbf5ce028b9cbac658728c1fee13ad8e2659bfa0c625cd685d711c708] <==
	E0919 22:23:33.115040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:23:33.129278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:23:33.194774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:23:33.337699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0919 22:23:35.055089       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:24:08.346116       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:08.346301       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:08.365410       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-78xs2" node="ha-326307-m02"
	E0919 22:24:08.365600       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="kube-system/kindnet-78xs2"
	E0919 22:24:08.379248       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"kindnet-78xs2\" not found" pod="kube-system/kindnet-78xs2"
	E0919 22:24:10.002296       1 schedule_one.go:975] "Scheduler cache AssumePod failed" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:10.002334       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	I0919 22:24:10.002368       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:31.751287       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-pnj9r" node="ha-326307-m03"
	E0919 22:24:31.751375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-pnj9r"
	E0919 22:24:31.887089       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:31.887576       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 173e48ec-ef56-4824-9f55-a04b199b7943(kube-system/kindnet-qxwpq) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	E0919 22:24:31.887605       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	I0919 22:24:31.888969       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:35.828083       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-nzzlb" node="ha-326307-m03"
	E0919 22:24:35.828187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-nzzlb"
	E0919 22:24:35.839864       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	E0919 22:24:35.839940       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ba4fd407-2e93-4324-ab2d-4f192d79fdf5(kube-system/kindnet-dmxl8) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	E0919 22:24:35.839964       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	I0919 22:24:35.841757       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
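The "already assigned to node" bind failures here are transient scheduling races; the same class of message shows up as FailedScheduling warnings on the busybox pod in the post-mortem below, and in each case the pod does end up bound. Pulling just those warnings out of the event stream would look like:

  kubectl --context ha-326307 get events -A --field-selector reason=FailedScheduling --sort-by=.lastTimestamp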
	
	
	==> kubelet <==
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638035    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-cni-cfg\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638087    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-xtables-lock\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638115    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70be5fcc-7ab6-4eb1-870d-988fee1a01bb-kube-proxy\") pod \"kube-proxy-8kxtv\" (UID: \"70be5fcc-7ab6-4eb1-870d-988fee1a01bb\") " pod="kube-system/kube-proxy-8kxtv"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140870    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64376c4d-1b82-490d-887d-7f628b134014-config-volume\") pod \"coredns-66bc5c9577-wqvzd\" (UID: \"64376c4d-1b82-490d-887d-7f628b134014\") " pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140945    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d073e38-b63e-494d-bda0-3dde372a950b-config-volume\") pod \"coredns-66bc5c9577-9j5pw\" (UID: \"7d073e38-b63e-494d-bda0-3dde372a950b\") " pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140976    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkhk\" (UniqueName: \"kubernetes.io/projected/64376c4d-1b82-490d-887d-7f628b134014-kube-api-access-8tkhk\") pod \"coredns-66bc5c9577-wqvzd\" (UID: \"64376c4d-1b82-490d-887d-7f628b134014\") " pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.141004    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gmbw\" (UniqueName: \"kubernetes.io/projected/7d073e38-b63e-494d-bda0-3dde372a950b-kube-api-access-8gmbw\") pod \"coredns-66bc5c9577-9j5pw\" (UID: \"7d073e38-b63e-494d-bda0-3dde372a950b\") " pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319752    1670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\""
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319858    1670 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\"" pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319884    1670 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\"" pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319966    1670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wqvzd_kube-system(64376c4d-1b82-490d-887d-7f628b134014)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wqvzd_kube-system(64376c4d-1b82-490d-887d-7f628b134014)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\\\": failed to find network info for sandbox \\\"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\\\"\"" pod="kube-system/coredns-66bc5c9577-wqvzd" podUID="64376c4d-1b82-490d-887d-7f628b134014"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332044    1670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\""
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332130    1670 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\"" pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332205    1670 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\"" pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332288    1670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9j5pw_kube-system(7d073e38-b63e-494d-bda0-3dde372a950b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9j5pw_kube-system(7d073e38-b63e-494d-bda0-3dde372a950b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\\\": failed to find network info for sandbox \\\"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\\\"\"" pod="kube-system/coredns-66bc5c9577-9j5pw" podUID="7d073e38-b63e-494d-bda0-3dde372a950b"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.543914    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cafe04c6-2dce-4b93-b6d1-205efc39b360-tmp\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.543969    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47vqf\" (UniqueName: \"kubernetes.io/projected/cafe04c6-2dce-4b93-b6d1-205efc39b360-kube-api-access-47vqf\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.684901    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gxnzs" podStartSLOduration=1.68487896 podStartE2EDuration="1.68487896s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:40.684630982 +0000 UTC m=+6.151051272" watchObservedRunningTime="2025-09-19 22:23:40.68487896 +0000 UTC m=+6.151299251"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.685802    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8kxtv" podStartSLOduration=1.685781067 podStartE2EDuration="1.685781067s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:40.670987608 +0000 UTC m=+6.137407898" watchObservedRunningTime="2025-09-19 22:23:40.685781067 +0000 UTC m=+6.152201360"
	Sep 19 22:23:41 ha-326307 kubelet[1670]: I0919 22:23:41.676063    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.676036489 podStartE2EDuration="1.676036489s" podCreationTimestamp="2025-09-19 22:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:41.675998333 +0000 UTC m=+7.142418624" watchObservedRunningTime="2025-09-19 22:23:41.676036489 +0000 UTC m=+7.142456778"
	Sep 19 22:23:45 ha-326307 kubelet[1670]: I0919 22:23:45.164667    1670 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:23:45 ha-326307 kubelet[1670]: I0919 22:23:45.165981    1670 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:23:52 ha-326307 kubelet[1670]: I0919 22:23:52.703916    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wqvzd" podStartSLOduration=13.703896267 podStartE2EDuration="13.703896267s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:52.703429297 +0000 UTC m=+18.169849612" watchObservedRunningTime="2025-09-19 22:23:52.703896267 +0000 UTC m=+18.170316558"
	Sep 19 22:23:56 ha-326307 kubelet[1670]: I0919 22:23:56.724956    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9j5pw" podStartSLOduration=17.724936721 podStartE2EDuration="17.724936721s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:56.724564031 +0000 UTC m=+22.190984322" watchObservedRunningTime="2025-09-19 22:23:56.724936721 +0000 UTC m=+22.191357012"
	Sep 19 22:25:18 ha-326307 kubelet[1670]: I0919 22:25:18.904730    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt2kb\" (UniqueName: \"kubernetes.io/projected/7533a5f9-7c6d-4476-9e03-eb8abe0aadbc-kube-api-access-rt2kb\") pod \"busybox-7b57f96db7-m8swj\" (UID: \"7533a5f9-7c6d-4476-9e03-eb8abe0aadbc\") " pod="default/busybox-7b57f96db7-m8swj"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-326307 -n ha-326307
helpers_test.go:269: (dbg) Run:  kubectl --context ha-326307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-jdczt
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-326307 describe pod busybox-7b57f96db7-jdczt
helpers_test.go:290: (dbg) kubectl --context ha-326307 describe pod busybox-7b57f96db7-jdczt:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-jdczt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ha-326307-m03/192.168.49.4
	Start Time:       Fri, 19 Sep 2025 22:25:18 +0000
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwg8l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xwg8l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                  Age                  From               Message
	  ----     ------                  ----                 ----               -------
	  Warning  FailedScheduling        13m                  default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-jdczt": pod busybox-7b57f96db7-jdczt is already assigned to node "ha-326307-m03"
	  Warning  FailedScheduling        13m                  default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-jdczt": pod busybox-7b57f96db7-jdczt is already assigned to node "ha-326307-m03"
	  Normal   Scheduled               13m                  default-scheduler  Successfully assigned default/busybox-7b57f96db7-jdczt to ha-326307-m03
	  Warning  FailedCreatePodSandBox  13m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f949ef20a496c1ef4510b9586bfdf0aa02ea1ca9948f762b5576ef36acab80c9": failed to find network info for sandbox "f949ef20a496c1ef4510b9586bfdf0aa02ea1ca9948f762b5576ef36acab80c9"
	  Warning  FailedCreatePodSandBox  13m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "306b20c8e47aaeb0b6ae068e406157020ceddab45da2b4f2ab7d80c0e47f4391": failed to find network info for sandbox "306b20c8e47aaeb0b6ae068e406157020ceddab45da2b4f2ab7d80c0e47f4391"
	  Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e6c50e24733dc1514dd610f1c51f99bc1a57d10929036ae887a87c4b187b9ac1": failed to find network info for sandbox "e6c50e24733dc1514dd610f1c51f99bc1a57d10929036ae887a87c4b187b9ac1"
	  Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9d4e7715d1c071862624112264db649229347a018044c9075df60fb9940c8e8a": failed to find network info for sandbox "9d4e7715d1c071862624112264db649229347a018044c9075df60fb9940c8e8a"
	  Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d188384b76e1cf43ce05a368351c54023e455d3fd0fddf79dc0d717558b93ee6": failed to find network info for sandbox "d188384b76e1cf43ce05a368351c54023e455d3fd0fddf79dc0d717558b93ee6"
	  Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "726dbde7347664ddd373a329867d125e92a7173ca43b01448ce154579a81a0bb": failed to find network info for sandbox "726dbde7347664ddd373a329867d125e92a7173ca43b01448ce154579a81a0bb"
	  Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e45f823e38e8e10ed14077a52e1750763b0c366d8a775bbf53f656c10861f185": failed to find network info for sandbox "e45f823e38e8e10ed14077a52e1750763b0c366d8a775bbf53f656c10861f185"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "92ae39dcfd89289f5a5fdc5ae0c23196a91b58214916aead7f95620a2697c009": failed to find network info for sandbox "92ae39dcfd89289f5a5fdc5ae0c23196a91b58214916aead7f95620a2697c009"
	  Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "98a14fefd702a6a8ff4d95d8bebac62053e439ee134d840d76e175ba4e8c45d6": failed to find network info for sandbox "98a14fefd702a6a8ff4d95d8bebac62053e439ee134d840d76e175ba4e8c45d6"
	  Warning  FailedCreatePodSandBox  3m6s (x39 over 11m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "68cb483b1808e73ea325cca055c7a7f1bd2a591a81aa8b2bcb8cb96560fd08b2": failed to find network info for sandbox "68cb483b1808e73ea325cca055c7a7f1bd2a591a81aa8b2bcb8cb96560fd08b2"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (14.82s)
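(Illustrative note, hedged: the post-mortem above locates stuck pods with `kubectl get po -A --field-selector=status.phase!=Running` and then describes them. A minimal Go sketch of the same non-running-pod check using client-go follows; it is not part of the test suite, and the kubeconfig path is a placeholder assumption.)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is a placeholder; the harness uses the ha-326307 profile's context.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same field selector the post-mortem helper passes to kubectl.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}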

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (66.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 node start m02 --alsologtostderr -v 5
E0919 22:38:34.776444   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 node start m02 --alsologtostderr -v 5: (8.553087235s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5: exit status 7 (760.165799ms)

                                                
                                                
-- stdout --
	ha-326307
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:38:41.716955   98450 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:38:41.717058   98450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:41.717063   98450 out.go:374] Setting ErrFile to fd 2...
	I0919 22:38:41.717067   98450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:41.717311   98450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:38:41.717478   98450 out.go:368] Setting JSON to false
	I0919 22:38:41.717497   98450 mustload.go:65] Loading cluster: ha-326307
	I0919 22:38:41.717675   98450 notify.go:220] Checking for updates...
	I0919 22:38:41.717965   98450 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:38:41.717995   98450 status.go:174] checking status of ha-326307 ...
	I0919 22:38:41.718511   98450 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:38:41.742225   98450 status.go:371] ha-326307 host status = "Running" (err=<nil>)
	I0919 22:38:41.742254   98450 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:41.742514   98450 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:38:41.763714   98450 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:41.764073   98450 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:41.764143   98450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:38:41.785084   98450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:38:41.880568   98450 ssh_runner.go:195] Run: systemctl --version
	I0919 22:38:41.887438   98450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:41.902437   98450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:41.964841   98450 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:38:41.954116348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:41.965461   98450 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:41.965490   98450 api_server.go:166] Checking apiserver status ...
	I0919 22:38:41.965523   98450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:41.978339   98450 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0919 22:38:41.989717   98450 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:41.989775   98450 ssh_runner.go:195] Run: ls
	I0919 22:38:41.994071   98450 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:41.998599   98450 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:41.998626   98450 status.go:463] ha-326307 apiserver status = Running (err=<nil>)
	I0919 22:38:41.998637   98450 status.go:176] ha-326307 status: &{Name:ha-326307 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:41.998670   98450 status.go:174] checking status of ha-326307-m02 ...
	I0919 22:38:41.998919   98450 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:38:42.018729   98450 status.go:371] ha-326307-m02 host status = "Running" (err=<nil>)
	I0919 22:38:42.018756   98450 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:38:42.019028   98450 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:38:42.038817   98450 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:38:42.039224   98450 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:42.039275   98450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:38:42.061051   98450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:38:42.157051   98450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:42.170601   98450 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:42.170632   98450 api_server.go:166] Checking apiserver status ...
	I0919 22:38:42.170691   98450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:42.183746   98450 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup
	W0919 22:38:42.194477   98450 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:42.194542   98450 ssh_runner.go:195] Run: ls
	I0919 22:38:42.198887   98450 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:42.203262   98450 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:42.203288   98450 status.go:463] ha-326307-m02 apiserver status = Running (err=<nil>)
	I0919 22:38:42.203296   98450 status.go:176] ha-326307-m02 status: &{Name:ha-326307-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:42.203321   98450 status.go:174] checking status of ha-326307-m03 ...
	I0919 22:38:42.203590   98450 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:38:42.222624   98450 status.go:371] ha-326307-m03 host status = "Running" (err=<nil>)
	I0919 22:38:42.222647   98450 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:38:42.222926   98450 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:38:42.241559   98450 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:38:42.241860   98450 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:42.241906   98450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:38:42.261523   98450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:38:42.357247   98450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:42.370498   98450 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:42.370525   98450 api_server.go:166] Checking apiserver status ...
	I0919 22:38:42.370567   98450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:42.383811   98450 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W0919 22:38:42.394704   98450 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:42.394753   98450 ssh_runner.go:195] Run: ls
	I0919 22:38:42.398875   98450 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:42.403571   98450 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:42.403597   98450 status.go:463] ha-326307-m03 apiserver status = Running (err=<nil>)
	I0919 22:38:42.403605   98450 status.go:176] ha-326307-m03 status: &{Name:ha-326307-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:42.403667   98450 status.go:174] checking status of ha-326307-m04 ...
	I0919 22:38:42.403901   98450 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:38:42.424362   98450 status.go:371] ha-326307-m04 host status = "Stopped" (err=<nil>)
	I0919 22:38:42.424394   98450 status.go:384] host is not running, skipping remaining checks
	I0919 22:38:42.424422   98450 status.go:176] ha-326307-m04 status: &{Name:ha-326307-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:38:42.429492   18210 retry.go:31] will retry after 1.13355726s: exit status 7
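(Illustrative note, hedged: each status probe above exits with status 7 because ha-326307-m04 still reports Stopped, and the harness retries after a short delay, as the retry.go lines show. The Go sketch below only mimics that retry-until-healthy pattern; the attempt count and backoff growth are assumptions, not minikube's actual retry implementation.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		// The harness runs the real binary: out/minikube-linux-amd64 -p ha-326307 status ...
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-326307", "status")
		if err := cmd.Run(); err == nil {
			fmt.Println("all nodes report healthy")
			return
		}
		fmt.Printf("attempt %d: non-zero exit (e.g. status 7), retrying after %s\n", attempt, delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between attempts (illustrative backoff)
	}
	fmt.Println("cluster never reached the expected state")
}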
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5: exit status 7 (753.485636ms)

                                                
                                                
-- stdout --
	ha-326307
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:38:43.610118   98662 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:38:43.610282   98662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:43.610292   98662 out.go:374] Setting ErrFile to fd 2...
	I0919 22:38:43.610296   98662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:43.610490   98662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:38:43.610680   98662 out.go:368] Setting JSON to false
	I0919 22:38:43.610701   98662 mustload.go:65] Loading cluster: ha-326307
	I0919 22:38:43.610768   98662 notify.go:220] Checking for updates...
	I0919 22:38:43.611073   98662 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:38:43.611092   98662 status.go:174] checking status of ha-326307 ...
	I0919 22:38:43.611555   98662 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:38:43.631748   98662 status.go:371] ha-326307 host status = "Running" (err=<nil>)
	I0919 22:38:43.631781   98662 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:43.632057   98662 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:38:43.652952   98662 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:43.653281   98662 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:43.653325   98662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:38:43.675758   98662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:38:43.771317   98662 ssh_runner.go:195] Run: systemctl --version
	I0919 22:38:43.776510   98662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:43.790424   98662 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:43.850262   98662 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:38:43.839693543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:43.850773   98662 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:43.850801   98662 api_server.go:166] Checking apiserver status ...
	I0919 22:38:43.850832   98662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:43.863669   98662 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0919 22:38:43.874673   98662 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:43.874741   98662 ssh_runner.go:195] Run: ls
	I0919 22:38:43.878906   98662 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:43.883470   98662 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:43.883495   98662 status.go:463] ha-326307 apiserver status = Running (err=<nil>)
	I0919 22:38:43.883504   98662 status.go:176] ha-326307 status: &{Name:ha-326307 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:43.883532   98662 status.go:174] checking status of ha-326307-m02 ...
	I0919 22:38:43.883776   98662 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:38:43.905202   98662 status.go:371] ha-326307-m02 host status = "Running" (err=<nil>)
	I0919 22:38:43.905233   98662 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:38:43.905509   98662 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:38:43.926929   98662 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:38:43.927209   98662 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:43.927247   98662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:38:43.947905   98662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:38:44.044963   98662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:44.058579   98662 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:44.058608   98662 api_server.go:166] Checking apiserver status ...
	I0919 22:38:44.058640   98662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:44.071232   98662 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup
	W0919 22:38:44.082411   98662 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:44.082484   98662 ssh_runner.go:195] Run: ls
	I0919 22:38:44.086694   98662 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:44.091037   98662 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:44.091061   98662 status.go:463] ha-326307-m02 apiserver status = Running (err=<nil>)
	I0919 22:38:44.091079   98662 status.go:176] ha-326307-m02 status: &{Name:ha-326307-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:44.091102   98662 status.go:174] checking status of ha-326307-m03 ...
	I0919 22:38:44.091392   98662 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:38:44.111612   98662 status.go:371] ha-326307-m03 host status = "Running" (err=<nil>)
	I0919 22:38:44.111638   98662 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:38:44.111911   98662 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:38:44.131250   98662 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:38:44.131516   98662 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:44.131551   98662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:38:44.150784   98662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:38:44.246484   98662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:44.258764   98662 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:44.258789   98662 api_server.go:166] Checking apiserver status ...
	I0919 22:38:44.258826   98662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:44.270944   98662 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W0919 22:38:44.283115   98662 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:44.283242   98662 ssh_runner.go:195] Run: ls
	I0919 22:38:44.287504   98662 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:44.291596   98662 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:44.291629   98662 status.go:463] ha-326307-m03 apiserver status = Running (err=<nil>)
	I0919 22:38:44.291643   98662 status.go:176] ha-326307-m03 status: &{Name:ha-326307-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:44.291663   98662 status.go:174] checking status of ha-326307-m04 ...
	I0919 22:38:44.291956   98662 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:38:44.312701   98662 status.go:371] ha-326307-m04 host status = "Stopped" (err=<nil>)
	I0919 22:38:44.312728   98662 status.go:384] host is not running, skipping remaining checks
	I0919 22:38:44.312736   98662 status.go:176] ha-326307-m04 status: &{Name:ha-326307-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:38:44.317618   18210 retry.go:31] will retry after 1.179376049s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5: exit status 7 (778.219186ms)

                                                
                                                
-- stdout --
	ha-326307
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:38:45.543866   98898 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:38:45.544146   98898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:45.544178   98898 out.go:374] Setting ErrFile to fd 2...
	I0919 22:38:45.544184   98898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:45.544436   98898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:38:45.544626   98898 out.go:368] Setting JSON to false
	I0919 22:38:45.544648   98898 mustload.go:65] Loading cluster: ha-326307
	I0919 22:38:45.544832   98898 notify.go:220] Checking for updates...
	I0919 22:38:45.545176   98898 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:38:45.545208   98898 status.go:174] checking status of ha-326307 ...
	I0919 22:38:45.545823   98898 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:38:45.567429   98898 status.go:371] ha-326307 host status = "Running" (err=<nil>)
	I0919 22:38:45.567456   98898 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:45.567795   98898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:38:45.590588   98898 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:45.590839   98898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:45.590885   98898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:38:45.611361   98898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:38:45.710137   98898 ssh_runner.go:195] Run: systemctl --version
	I0919 22:38:45.715789   98898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:45.730005   98898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:45.798710   98898 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:38:45.785324553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:45.799415   98898 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:45.799451   98898 api_server.go:166] Checking apiserver status ...
	I0919 22:38:45.799496   98898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:45.812014   98898 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0919 22:38:45.823989   98898 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:45.824046   98898 ssh_runner.go:195] Run: ls
	I0919 22:38:45.828822   98898 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:45.833067   98898 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:45.833100   98898 status.go:463] ha-326307 apiserver status = Running (err=<nil>)
	I0919 22:38:45.833112   98898 status.go:176] ha-326307 status: &{Name:ha-326307 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:45.833127   98898 status.go:174] checking status of ha-326307-m02 ...
	I0919 22:38:45.833448   98898 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:38:45.853617   98898 status.go:371] ha-326307-m02 host status = "Running" (err=<nil>)
	I0919 22:38:45.853645   98898 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:38:45.853898   98898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:38:45.874386   98898 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:38:45.874633   98898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:45.874674   98898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:38:45.895777   98898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:38:45.993464   98898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:46.008118   98898 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:46.008142   98898 api_server.go:166] Checking apiserver status ...
	I0919 22:38:46.008194   98898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:46.022339   98898 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup
	W0919 22:38:46.034259   98898 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:46.034306   98898 ssh_runner.go:195] Run: ls
	I0919 22:38:46.038784   98898 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:46.043571   98898 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:46.043603   98898 status.go:463] ha-326307-m02 apiserver status = Running (err=<nil>)
	I0919 22:38:46.043615   98898 status.go:176] ha-326307-m02 status: &{Name:ha-326307-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:46.043633   98898 status.go:174] checking status of ha-326307-m03 ...
	I0919 22:38:46.043944   98898 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:38:46.066559   98898 status.go:371] ha-326307-m03 host status = "Running" (err=<nil>)
	I0919 22:38:46.066582   98898 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:38:46.066870   98898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:38:46.086900   98898 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:38:46.087262   98898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:46.087314   98898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:38:46.107675   98898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:38:46.204518   98898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:46.219116   98898 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:46.219151   98898 api_server.go:166] Checking apiserver status ...
	I0919 22:38:46.219233   98898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:46.231990   98898 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W0919 22:38:46.243178   98898 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:46.243241   98898 ssh_runner.go:195] Run: ls
	I0919 22:38:46.247289   98898 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:46.251586   98898 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:46.251610   98898 status.go:463] ha-326307-m03 apiserver status = Running (err=<nil>)
	I0919 22:38:46.251619   98898 status.go:176] ha-326307-m03 status: &{Name:ha-326307-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:46.251643   98898 status.go:174] checking status of ha-326307-m04 ...
	I0919 22:38:46.251868   98898 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:38:46.271824   98898 status.go:371] ha-326307-m04 host status = "Stopped" (err=<nil>)
	I0919 22:38:46.271856   98898 status.go:384] host is not running, skipping remaining checks
	I0919 22:38:46.271865   98898 status.go:176] ha-326307-m04 status: &{Name:ha-326307-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:38:46.276665   18210 retry.go:31] will retry after 1.590263865s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5: exit status 7 (757.019027ms)

                                                
                                                
-- stdout --
	ha-326307
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:38:47.911146   99125 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:38:47.911312   99125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:47.911322   99125 out.go:374] Setting ErrFile to fd 2...
	I0919 22:38:47.911326   99125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:47.911525   99125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:38:47.911703   99125 out.go:368] Setting JSON to false
	I0919 22:38:47.911722   99125 mustload.go:65] Loading cluster: ha-326307
	I0919 22:38:47.911760   99125 notify.go:220] Checking for updates...
	I0919 22:38:47.912062   99125 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:38:47.912079   99125 status.go:174] checking status of ha-326307 ...
	I0919 22:38:47.912585   99125 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:38:47.933495   99125 status.go:371] ha-326307 host status = "Running" (err=<nil>)
	I0919 22:38:47.933517   99125 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:47.933771   99125 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:38:47.953541   99125 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:47.953788   99125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:47.953828   99125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:38:47.973548   99125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:38:48.067968   99125 ssh_runner.go:195] Run: systemctl --version
	I0919 22:38:48.072932   99125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:48.088227   99125 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:48.150327   99125 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:38:48.139777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:48.150830   99125 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:48.150855   99125 api_server.go:166] Checking apiserver status ...
	I0919 22:38:48.150884   99125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:48.163074   99125 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0919 22:38:48.174285   99125 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:48.174332   99125 ssh_runner.go:195] Run: ls
	I0919 22:38:48.178323   99125 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:48.184667   99125 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:48.184708   99125 status.go:463] ha-326307 apiserver status = Running (err=<nil>)
	I0919 22:38:48.184717   99125 status.go:176] ha-326307 status: &{Name:ha-326307 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:48.184736   99125 status.go:174] checking status of ha-326307-m02 ...
	I0919 22:38:48.185004   99125 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:38:48.205731   99125 status.go:371] ha-326307-m02 host status = "Running" (err=<nil>)
	I0919 22:38:48.205753   99125 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:38:48.205993   99125 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:38:48.225319   99125 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:38:48.225568   99125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:48.225619   99125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:38:48.247084   99125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:38:48.344740   99125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:48.359501   99125 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:48.359531   99125 api_server.go:166] Checking apiserver status ...
	I0919 22:38:48.359571   99125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:48.372437   99125 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup
	W0919 22:38:48.383723   99125 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:48.383782   99125 ssh_runner.go:195] Run: ls
	I0919 22:38:48.388145   99125 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:48.392834   99125 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:48.392873   99125 status.go:463] ha-326307-m02 apiserver status = Running (err=<nil>)
	I0919 22:38:48.392881   99125 status.go:176] ha-326307-m02 status: &{Name:ha-326307-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:48.392901   99125 status.go:174] checking status of ha-326307-m03 ...
	I0919 22:38:48.393215   99125 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:38:48.412689   99125 status.go:371] ha-326307-m03 host status = "Running" (err=<nil>)
	I0919 22:38:48.412719   99125 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:38:48.412962   99125 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:38:48.433314   99125 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:38:48.433606   99125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:48.433642   99125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:38:48.453374   99125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:38:48.552046   99125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:48.566509   99125 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:48.566534   99125 api_server.go:166] Checking apiserver status ...
	I0919 22:38:48.566577   99125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:48.579837   99125 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W0919 22:38:48.591182   99125 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:48.591255   99125 ssh_runner.go:195] Run: ls
	I0919 22:38:48.596126   99125 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:48.600521   99125 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:48.600551   99125 status.go:463] ha-326307-m03 apiserver status = Running (err=<nil>)
	I0919 22:38:48.600560   99125 status.go:176] ha-326307-m03 status: &{Name:ha-326307-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:48.600581   99125 status.go:174] checking status of ha-326307-m04 ...
	I0919 22:38:48.600870   99125 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:38:48.621260   99125 status.go:371] ha-326307-m04 host status = "Stopped" (err=<nil>)
	I0919 22:38:48.621299   99125 status.go:384] host is not running, skipping remaining checks
	I0919 22:38:48.621306   99125 status.go:176] ha-326307-m04 status: &{Name:ha-326307-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:38:48.625891   18210 retry.go:31] will retry after 2.722507245s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5: exit status 7 (753.449862ms)

                                                
                                                
-- stdout --
	ha-326307
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:38:51.394868   99419 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:38:51.394981   99419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:51.394993   99419 out.go:374] Setting ErrFile to fd 2...
	I0919 22:38:51.394997   99419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:51.395222   99419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:38:51.395434   99419 out.go:368] Setting JSON to false
	I0919 22:38:51.395455   99419 mustload.go:65] Loading cluster: ha-326307
	I0919 22:38:51.395637   99419 notify.go:220] Checking for updates...
	I0919 22:38:51.395877   99419 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:38:51.395901   99419 status.go:174] checking status of ha-326307 ...
	I0919 22:38:51.396539   99419 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:38:51.420241   99419 status.go:371] ha-326307 host status = "Running" (err=<nil>)
	I0919 22:38:51.420322   99419 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:51.420721   99419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:38:51.440858   99419 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:51.441107   99419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:51.441165   99419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:38:51.459998   99419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:38:51.555812   99419 ssh_runner.go:195] Run: systemctl --version
	I0919 22:38:51.560565   99419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:51.573395   99419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:51.631333   99419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:38:51.620627486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:51.631949   99419 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:51.631983   99419 api_server.go:166] Checking apiserver status ...
	I0919 22:38:51.632028   99419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:51.646763   99419 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0919 22:38:51.657846   99419 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:51.657902   99419 ssh_runner.go:195] Run: ls
	I0919 22:38:51.662126   99419 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:51.666862   99419 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:51.666892   99419 status.go:463] ha-326307 apiserver status = Running (err=<nil>)
	I0919 22:38:51.666905   99419 status.go:176] ha-326307 status: &{Name:ha-326307 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:51.666925   99419 status.go:174] checking status of ha-326307-m02 ...
	I0919 22:38:51.667199   99419 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:38:51.687079   99419 status.go:371] ha-326307-m02 host status = "Running" (err=<nil>)
	I0919 22:38:51.687108   99419 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:38:51.687415   99419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:38:51.707505   99419 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:38:51.707782   99419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:51.707816   99419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:38:51.727990   99419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:38:51.824275   99419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:51.837288   99419 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:51.837312   99419 api_server.go:166] Checking apiserver status ...
	I0919 22:38:51.837342   99419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:51.850620   99419 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup
	W0919 22:38:51.861935   99419 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:51.861984   99419 ssh_runner.go:195] Run: ls
	I0919 22:38:51.866290   99419 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:51.870854   99419 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:51.870884   99419 status.go:463] ha-326307-m02 apiserver status = Running (err=<nil>)
	I0919 22:38:51.870896   99419 status.go:176] ha-326307-m02 status: &{Name:ha-326307-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:51.870924   99419 status.go:174] checking status of ha-326307-m03 ...
	I0919 22:38:51.871285   99419 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:38:51.890542   99419 status.go:371] ha-326307-m03 host status = "Running" (err=<nil>)
	I0919 22:38:51.890564   99419 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:38:51.890804   99419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:38:51.910049   99419 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:38:51.910322   99419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:51.910357   99419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:38:51.932290   99419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:38:52.028080   99419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:52.042369   99419 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:52.042393   99419 api_server.go:166] Checking apiserver status ...
	I0919 22:38:52.042436   99419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:52.055609   99419 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W0919 22:38:52.066484   99419 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:52.066539   99419 ssh_runner.go:195] Run: ls
	I0919 22:38:52.070755   99419 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:52.075219   99419 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:52.075243   99419 status.go:463] ha-326307-m03 apiserver status = Running (err=<nil>)
	I0919 22:38:52.075252   99419 status.go:176] ha-326307-m03 status: &{Name:ha-326307-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:52.075267   99419 status.go:174] checking status of ha-326307-m04 ...
	I0919 22:38:52.075497   99419 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:38:52.097031   99419 status.go:371] ha-326307-m04 host status = "Stopped" (err=<nil>)
	I0919 22:38:52.097054   99419 status.go:384] host is not running, skipping remaining checks
	I0919 22:38:52.097060   99419 status.go:176] ha-326307-m04 status: &{Name:ha-326307-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:38:52.103419   18210 retry.go:31] will retry after 7.423202227s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5: exit status 7 (769.620064ms)

                                                
                                                
-- stdout --
	ha-326307
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:38:59.580828   99646 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:38:59.580973   99646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:59.580988   99646 out.go:374] Setting ErrFile to fd 2...
	I0919 22:38:59.580993   99646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:59.581225   99646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:38:59.581420   99646 out.go:368] Setting JSON to false
	I0919 22:38:59.581441   99646 mustload.go:65] Loading cluster: ha-326307
	I0919 22:38:59.581500   99646 notify.go:220] Checking for updates...
	I0919 22:38:59.581987   99646 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:38:59.582019   99646 status.go:174] checking status of ha-326307 ...
	I0919 22:38:59.582577   99646 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:38:59.607965   99646 status.go:371] ha-326307 host status = "Running" (err=<nil>)
	I0919 22:38:59.608026   99646 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:59.608437   99646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:38:59.629570   99646 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:38:59.629832   99646 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:59.629875   99646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:38:59.654185   99646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:38:59.751375   99646 ssh_runner.go:195] Run: systemctl --version
	I0919 22:38:59.756245   99646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:38:59.770016   99646 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:59.831555   99646 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:38:59.820546935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:59.832318   99646 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:38:59.832355   99646 api_server.go:166] Checking apiserver status ...
	I0919 22:38:59.832403   99646 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:38:59.845429   99646 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0919 22:38:59.857436   99646 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:59.857484   99646 ssh_runner.go:195] Run: ls
	I0919 22:38:59.861871   99646 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:38:59.866429   99646 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:38:59.866455   99646 status.go:463] ha-326307 apiserver status = Running (err=<nil>)
	I0919 22:38:59.866466   99646 status.go:176] ha-326307 status: &{Name:ha-326307 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:38:59.866481   99646 status.go:174] checking status of ha-326307-m02 ...
	I0919 22:38:59.866776   99646 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:38:59.886199   99646 status.go:371] ha-326307-m02 host status = "Running" (err=<nil>)
	I0919 22:38:59.886224   99646 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:38:59.886490   99646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:38:59.907560   99646 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:38:59.907821   99646 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:59.907854   99646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:38:59.930548   99646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:39:00.025669   99646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:00.038617   99646 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:39:00.038651   99646 api_server.go:166] Checking apiserver status ...
	I0919 22:39:00.038688   99646 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:00.052745   99646 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup
	W0919 22:39:00.063324   99646 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:00.063379   99646 ssh_runner.go:195] Run: ls
	I0919 22:39:00.067235   99646 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:39:00.071690   99646 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:39:00.071718   99646 status.go:463] ha-326307-m02 apiserver status = Running (err=<nil>)
	I0919 22:39:00.071728   99646 status.go:176] ha-326307-m02 status: &{Name:ha-326307-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:39:00.071751   99646 status.go:174] checking status of ha-326307-m03 ...
	I0919 22:39:00.071985   99646 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:39:00.091476   99646 status.go:371] ha-326307-m03 host status = "Running" (err=<nil>)
	I0919 22:39:00.091504   99646 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:39:00.091766   99646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:39:00.111349   99646 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:39:00.111617   99646 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:00.111659   99646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:39:00.131321   99646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:39:00.226809   99646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:00.240046   99646 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:39:00.240073   99646 api_server.go:166] Checking apiserver status ...
	I0919 22:39:00.240103   99646 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:00.252023   99646 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W0919 22:39:00.263628   99646 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:00.263684   99646 ssh_runner.go:195] Run: ls
	I0919 22:39:00.267875   99646 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:39:00.272875   99646 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:39:00.272909   99646 status.go:463] ha-326307-m03 apiserver status = Running (err=<nil>)
	I0919 22:39:00.272920   99646 status.go:176] ha-326307-m03 status: &{Name:ha-326307-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:39:00.272936   99646 status.go:174] checking status of ha-326307-m04 ...
	I0919 22:39:00.273335   99646 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:39:00.293005   99646 status.go:371] ha-326307-m04 host status = "Stopped" (err=<nil>)
	I0919 22:39:00.293033   99646 status.go:384] host is not running, skipping remaining checks
	I0919 22:39:00.293042   99646 status.go:176] ha-326307-m04 status: &{Name:ha-326307-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:39:00.298279   18210 retry.go:31] will retry after 4.250810812s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5: exit status 7 (766.003957ms)

                                                
                                                
-- stdout --
	ha-326307
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:39:04.596720   99895 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:39:04.596851   99895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:39:04.596860   99895 out.go:374] Setting ErrFile to fd 2...
	I0919 22:39:04.596864   99895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:39:04.597079   99895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:39:04.597269   99895 out.go:368] Setting JSON to false
	I0919 22:39:04.597288   99895 mustload.go:65] Loading cluster: ha-326307
	I0919 22:39:04.597480   99895 notify.go:220] Checking for updates...
	I0919 22:39:04.597717   99895 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:39:04.597738   99895 status.go:174] checking status of ha-326307 ...
	I0919 22:39:04.598119   99895 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:39:04.618797   99895 status.go:371] ha-326307 host status = "Running" (err=<nil>)
	I0919 22:39:04.618822   99895 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:39:04.619111   99895 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:39:04.640270   99895 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:39:04.640742   99895 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:04.640812   99895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:39:04.662452   99895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:39:04.761415   99895 ssh_runner.go:195] Run: systemctl --version
	I0919 22:39:04.767066   99895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:04.780577   99895 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:39:04.843391   99895 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:39:04.831282364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:39:04.843903   99895 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:39:04.843928   99895 api_server.go:166] Checking apiserver status ...
	I0919 22:39:04.843962   99895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:04.857298   99895 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0919 22:39:04.868771   99895 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:04.868830   99895 ssh_runner.go:195] Run: ls
	I0919 22:39:04.873346   99895 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:39:04.879749   99895 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:39:04.879776   99895 status.go:463] ha-326307 apiserver status = Running (err=<nil>)
	I0919 22:39:04.879788   99895 status.go:176] ha-326307 status: &{Name:ha-326307 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:39:04.879876   99895 status.go:174] checking status of ha-326307-m02 ...
	I0919 22:39:04.880143   99895 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:39:04.900072   99895 status.go:371] ha-326307-m02 host status = "Running" (err=<nil>)
	I0919 22:39:04.900100   99895 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:39:04.900477   99895 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:39:04.920579   99895 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:39:04.920820   99895 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:04.920858   99895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:39:04.940607   99895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:39:05.037230   99895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:05.052475   99895 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:39:05.052502   99895 api_server.go:166] Checking apiserver status ...
	I0919 22:39:05.052535   99895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:05.065141   99895 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup
	W0919 22:39:05.076655   99895 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:05.076719   99895 ssh_runner.go:195] Run: ls
	I0919 22:39:05.081273   99895 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:39:05.085981   99895 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:39:05.086013   99895 status.go:463] ha-326307-m02 apiserver status = Running (err=<nil>)
	I0919 22:39:05.086023   99895 status.go:176] ha-326307-m02 status: &{Name:ha-326307-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:39:05.086053   99895 status.go:174] checking status of ha-326307-m03 ...
	I0919 22:39:05.086391   99895 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:39:05.106214   99895 status.go:371] ha-326307-m03 host status = "Running" (err=<nil>)
	I0919 22:39:05.106237   99895 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:39:05.106524   99895 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:39:05.127267   99895 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:39:05.127589   99895 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:05.127637   99895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:39:05.148605   99895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:39:05.244201   99895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:05.257840   99895 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:39:05.257869   99895 api_server.go:166] Checking apiserver status ...
	I0919 22:39:05.257909   99895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:05.270197   99895 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W0919 22:39:05.281733   99895 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:05.281800   99895 ssh_runner.go:195] Run: ls
	I0919 22:39:05.286055   99895 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:39:05.290420   99895 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:39:05.290443   99895 status.go:463] ha-326307-m03 apiserver status = Running (err=<nil>)
	I0919 22:39:05.290452   99895 status.go:176] ha-326307-m03 status: &{Name:ha-326307-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:39:05.290468   99895 status.go:174] checking status of ha-326307-m04 ...
	I0919 22:39:05.290750   99895 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:39:05.311850   99895 status.go:371] ha-326307-m04 host status = "Stopped" (err=<nil>)
	I0919 22:39:05.311873   99895 status.go:384] host is not running, skipping remaining checks
	I0919 22:39:05.311879   99895 status.go:176] ha-326307-m04 status: &{Name:ha-326307-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:39:05.317051   18210 retry.go:31] will retry after 11.138446452s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5: exit status 7 (765.211815ms)

                                                
                                                
-- stdout --
	ha-326307
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:39:16.501746  100179 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:39:16.501899  100179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:39:16.501910  100179 out.go:374] Setting ErrFile to fd 2...
	I0919 22:39:16.501914  100179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:39:16.502115  100179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:39:16.502324  100179 out.go:368] Setting JSON to false
	I0919 22:39:16.502348  100179 mustload.go:65] Loading cluster: ha-326307
	I0919 22:39:16.502403  100179 notify.go:220] Checking for updates...
	I0919 22:39:16.502818  100179 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:39:16.502840  100179 status.go:174] checking status of ha-326307 ...
	I0919 22:39:16.503393  100179 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:39:16.526300  100179 status.go:371] ha-326307 host status = "Running" (err=<nil>)
	I0919 22:39:16.526327  100179 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:39:16.526649  100179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:39:16.545729  100179 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:39:16.545978  100179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:16.546013  100179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:39:16.566244  100179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:39:16.663189  100179 ssh_runner.go:195] Run: systemctl --version
	I0919 22:39:16.668820  100179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:16.683367  100179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:39:16.749567  100179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:39:16.737765538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:39:16.750366  100179 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:39:16.750404  100179 api_server.go:166] Checking apiserver status ...
	I0919 22:39:16.750458  100179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:16.764218  100179 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0919 22:39:16.776178  100179 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:16.776248  100179 ssh_runner.go:195] Run: ls
	I0919 22:39:16.781066  100179 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:39:16.786332  100179 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:39:16.786357  100179 status.go:463] ha-326307 apiserver status = Running (err=<nil>)
	I0919 22:39:16.786366  100179 status.go:176] ha-326307 status: &{Name:ha-326307 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:39:16.786381  100179 status.go:174] checking status of ha-326307-m02 ...
	I0919 22:39:16.786620  100179 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:39:16.805797  100179 status.go:371] ha-326307-m02 host status = "Running" (err=<nil>)
	I0919 22:39:16.805823  100179 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:39:16.806087  100179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:39:16.827163  100179 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:39:16.827703  100179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:16.827766  100179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:39:16.848633  100179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:39:16.945193  100179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:16.958749  100179 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:39:16.958778  100179 api_server.go:166] Checking apiserver status ...
	I0919 22:39:16.958812  100179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:16.972328  100179 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup
	W0919 22:39:16.984061  100179 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:16.984107  100179 ssh_runner.go:195] Run: ls
	I0919 22:39:16.988431  100179 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:39:16.993841  100179 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:39:16.993872  100179 status.go:463] ha-326307-m02 apiserver status = Running (err=<nil>)
	I0919 22:39:16.993886  100179 status.go:176] ha-326307-m02 status: &{Name:ha-326307-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:39:16.993907  100179 status.go:174] checking status of ha-326307-m03 ...
	I0919 22:39:16.994292  100179 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:39:17.014718  100179 status.go:371] ha-326307-m03 host status = "Running" (err=<nil>)
	I0919 22:39:17.014745  100179 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:39:17.015029  100179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:39:17.036221  100179 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:39:17.036530  100179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:17.036590  100179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:39:17.056582  100179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:39:17.150749  100179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:17.165274  100179 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:39:17.165299  100179 api_server.go:166] Checking apiserver status ...
	I0919 22:39:17.165339  100179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:17.177455  100179 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W0919 22:39:17.188374  100179 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:17.188425  100179 ssh_runner.go:195] Run: ls
	I0919 22:39:17.192581  100179 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:39:17.196920  100179 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:39:17.196948  100179 status.go:463] ha-326307-m03 apiserver status = Running (err=<nil>)
	I0919 22:39:17.196957  100179 status.go:176] ha-326307-m03 status: &{Name:ha-326307-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:39:17.196974  100179 status.go:174] checking status of ha-326307-m04 ...
	I0919 22:39:17.197279  100179 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:39:17.216695  100179 status.go:371] ha-326307-m04 host status = "Stopped" (err=<nil>)
	I0919 22:39:17.216715  100179 status.go:384] host is not running, skipping remaining checks
	I0919 22:39:17.216722  100179 status.go:176] ha-326307-m04 status: &{Name:ha-326307-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:39:17.221913   18210 retry.go:31] will retry after 19.34356623s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5: exit status 7 (747.301931ms)

                                                
                                                
-- stdout --
	ha-326307
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:39:36.612817  100555 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:39:36.612958  100555 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:39:36.612970  100555 out.go:374] Setting ErrFile to fd 2...
	I0919 22:39:36.612976  100555 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:39:36.613229  100555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:39:36.613453  100555 out.go:368] Setting JSON to false
	I0919 22:39:36.613476  100555 mustload.go:65] Loading cluster: ha-326307
	I0919 22:39:36.613597  100555 notify.go:220] Checking for updates...
	I0919 22:39:36.614093  100555 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:39:36.614121  100555 status.go:174] checking status of ha-326307 ...
	I0919 22:39:36.614756  100555 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:39:36.637193  100555 status.go:371] ha-326307 host status = "Running" (err=<nil>)
	I0919 22:39:36.637221  100555 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:39:36.637503  100555 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:39:36.658376  100555 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:39:36.658692  100555 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:36.658747  100555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:39:36.678402  100555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:39:36.773845  100555 ssh_runner.go:195] Run: systemctl --version
	I0919 22:39:36.778776  100555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:36.792085  100555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:39:36.852689  100555 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:39:36.840896027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:39:36.853262  100555 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:39:36.853291  100555 api_server.go:166] Checking apiserver status ...
	I0919 22:39:36.853325  100555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:36.866768  100555 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0919 22:39:36.877943  100555 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:36.877990  100555 ssh_runner.go:195] Run: ls
	I0919 22:39:36.882190  100555 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:39:36.888673  100555 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:39:36.888703  100555 status.go:463] ha-326307 apiserver status = Running (err=<nil>)
	I0919 22:39:36.888716  100555 status.go:176] ha-326307 status: &{Name:ha-326307 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:39:36.888736  100555 status.go:174] checking status of ha-326307-m02 ...
	I0919 22:39:36.888993  100555 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:39:36.909098  100555 status.go:371] ha-326307-m02 host status = "Running" (err=<nil>)
	I0919 22:39:36.909122  100555 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:39:36.909407  100555 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:39:36.928357  100555 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:39:36.928610  100555 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:36.928647  100555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:39:36.948678  100555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:39:37.044228  100555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:37.058697  100555 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:39:37.058723  100555 api_server.go:166] Checking apiserver status ...
	I0919 22:39:37.058754  100555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:37.071582  100555 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup
	W0919 22:39:37.082465  100555 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/590/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:37.082533  100555 ssh_runner.go:195] Run: ls
	I0919 22:39:37.087593  100555 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:39:37.092811  100555 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:39:37.092838  100555 status.go:463] ha-326307-m02 apiserver status = Running (err=<nil>)
	I0919 22:39:37.092850  100555 status.go:176] ha-326307-m02 status: &{Name:ha-326307-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:39:37.092886  100555 status.go:174] checking status of ha-326307-m03 ...
	I0919 22:39:37.093130  100555 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:39:37.113316  100555 status.go:371] ha-326307-m03 host status = "Running" (err=<nil>)
	I0919 22:39:37.113339  100555 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:39:37.113620  100555 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:39:37.132060  100555 host.go:66] Checking if "ha-326307-m03" exists ...
	I0919 22:39:37.132417  100555 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:37.132467  100555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:39:37.152057  100555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:39:37.247360  100555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:37.260403  100555 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:39:37.260433  100555 api_server.go:166] Checking apiserver status ...
	I0919 22:39:37.260463  100555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:37.272338  100555 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W0919 22:39:37.282539  100555 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:37.282594  100555 ssh_runner.go:195] Run: ls
	I0919 22:39:37.286878  100555 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:39:37.291053  100555 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:39:37.291079  100555 status.go:463] ha-326307-m03 apiserver status = Running (err=<nil>)
	I0919 22:39:37.291088  100555 status.go:176] ha-326307-m03 status: &{Name:ha-326307-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:39:37.291104  100555 status.go:174] checking status of ha-326307-m04 ...
	I0919 22:39:37.291376  100555 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:39:37.310104  100555 status.go:371] ha-326307-m04 host status = "Stopped" (err=<nil>)
	I0919 22:39:37.310125  100555 status.go:384] host is not running, skipping remaining checks
	I0919 22:39:37.310130  100555 status.go:176] ha-326307-m04 status: &{Name:ha-326307-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
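The repeated "unable to find freezer cgroup" warnings in the stderr above are non-fatal: on a cgroup v2 host /proc/<pid>/cgroup has no per-controller "freezer:" entry, so the egrep exits 1 and the probe simply proceeds to the apiserver /healthz check, which the log shows returning 200/"ok" for all three control-plane nodes. Below is a minimal, illustrative Go sketch of that healthz probe, not the harness's own client; the endpoint is copied from the log (the kubeconfig server address), and skipping TLS verification is an illustrative shortcut, since the real client trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint copied from the log: the server address found in the ha-326307 kubeconfig.
	const healthz = "https://192.168.49.254:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative shortcut only; the integration client verifies against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(healthz)
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // the run above logged 200 and "ok"
}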
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5" : exit status 7
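The assertion at ha_test.go:434 fails because the status command exits non-zero: in this run ha-326307-m04 is reported Stopped for host, kubelet and apiserver, and minikube encodes that node state in the exit code's low bits, which is why status 7 is returned here. A hedged sketch of how a caller can re-run the same command line and recover that exit code (binary path, profile and flags are taken from the log; everything else is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Command line copied from the failing test, relative to the integration workspace.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-326307",
		"status", "--alsologtostderr", "-v", "5")

	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the run above this prints 7, because ha-326307-m04 is Stopped.
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}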
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-326307
helpers_test.go:243: (dbg) docker inspect ha-326307:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	        "Created": "2025-09-19T22:23:18.619000062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 69921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:23:18.670514121Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hosts",
	        "LogPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3-json.log",
	        "Name": "/ha-326307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-326307:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-326307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	                "LowerDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-326307",
	                "Source": "/var/lib/docker/volumes/ha-326307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-326307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-326307",
	                "name.minikube.sigs.k8s.io": "ha-326307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b9c61cd0152986e2b265b3cf0a7628b1c049e495ce30493b8e54f6b9446115f",
	            "SandboxKey": "/var/run/docker/netns/8b9c61cd0152",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-326307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:80:09:d2:65:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "465af21e2d8d112a34f14f7c3ab89eac4e6e57f582ff8e32b514381f55dd085e",
	                    "EndpointID": "f35735061c65841c2c1ba7f2859db25885582588fa8f2d14e3a015320f6c3fc4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-326307",
	                        "5e0f1fe86b08"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
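The status probes earlier in the log and this post-mortem both read container state from the Docker CLI (docker container inspect ... --format={{.State.Status}}). A small illustrative Go wrapper around that exact call follows; containerState is a hypothetical helper name, while the CLI flags and node names are those shown in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState mirrors the cli_runner call in the log:
// docker container inspect --format={{.State.Status}} <name>
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Status}}", name).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	for _, n := range []string{"ha-326307", "ha-326307-m02", "ha-326307-m03", "ha-326307-m04"} {
		state, err := containerState(n)
		// Docker reports "running" for the live nodes and "exited" for the stopped
		// m04 node, which minikube surfaces as the Stopped status seen above.
		fmt.Println(n, state, err)
	}
}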
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-326307 -n ha-326307
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 logs -n 25: (1.255250681s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307:/home/docker/cp-test_ha-326307-m03_ha-326307.txt                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307.txt                                                │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307-m03_ha-326307-m02.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m02 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m02.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp testdata/cp-test.txt ha-326307-m04:/home/docker/cp-test.txt                                                            │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307-m04.txt │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307:/home/docker/cp-test_ha-326307-m04_ha-326307.txt                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307.txt                                                │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m02 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m03:/home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ node    │ ha-326307 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ node    │ ha-326307 node start m02 --alsologtostderr -v 5                                                                                     │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:23:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:23:13.527478   69358 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:23:13.527574   69358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:13.527579   69358 out.go:374] Setting ErrFile to fd 2...
	I0919 22:23:13.527586   69358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:13.527823   69358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:23:13.528355   69358 out.go:368] Setting JSON to false
	I0919 22:23:13.529260   69358 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3938,"bootTime":1758316656,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:23:13.529345   69358 start.go:140] virtualization: kvm guest
	I0919 22:23:13.531661   69358 out.go:179] * [ha-326307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:23:13.533198   69358 notify.go:220] Checking for updates...
	I0919 22:23:13.533231   69358 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:23:13.534827   69358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:23:13.536340   69358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:23:13.537773   69358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:23:13.539372   69358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:23:13.541189   69358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:23:13.542697   69358 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:23:13.568228   69358 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:23:13.568380   69358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:13.622546   69358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:23:13.612893654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:13.622646   69358 docker.go:318] overlay module found
	I0919 22:23:13.624668   69358 out.go:179] * Using the docker driver based on user configuration
	I0919 22:23:13.626116   69358 start.go:304] selected driver: docker
	I0919 22:23:13.626134   69358 start.go:918] validating driver "docker" against <nil>
	I0919 22:23:13.626147   69358 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:23:13.626725   69358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:13.684385   69358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:23:13.672811393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:13.684569   69358 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:23:13.684775   69358 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:23:13.686618   69358 out.go:179] * Using Docker driver with root privileges
	I0919 22:23:13.687924   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:13.688000   69358 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:23:13.688014   69358 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:23:13.688089   69358 start.go:348] cluster config:
	{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m
0s}
	I0919 22:23:13.689601   69358 out.go:179] * Starting "ha-326307" primary control-plane node in "ha-326307" cluster
	I0919 22:23:13.691305   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:23:13.692823   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:23:13.694304   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:13.694378   69358 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 22:23:13.694398   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:23:13.694426   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:23:13.694515   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:23:13.694533   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:23:13.694981   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:13.695014   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json: {Name:mk9e3af266bcfbabd18624d7d22535c6f1841e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:13.716737   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:23:13.716759   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:23:13.716776   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:23:13.716797   69358 start.go:360] acquireMachinesLock for ha-326307: {Name:mk42b79b90944aab63c8b37c2f94e04ca1ebec1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:23:13.716893   69358 start.go:364] duration metric: took 80.537µs to acquireMachinesLock for "ha-326307"
	I0919 22:23:13.716915   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:13.716974   69358 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:23:13.719062   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:23:13.719317   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:23:13.719352   69358 client.go:168] LocalClient.Create starting
	I0919 22:23:13.719447   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:23:13.719502   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:13.719517   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:13.719580   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:23:13.719600   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:13.719610   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:13.719933   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:23:13.737609   69358 cli_runner.go:211] docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:23:13.737699   69358 network_create.go:284] running [docker network inspect ha-326307] to gather additional debugging logs...
	I0919 22:23:13.737725   69358 cli_runner.go:164] Run: docker network inspect ha-326307
	W0919 22:23:13.755400   69358 cli_runner.go:211] docker network inspect ha-326307 returned with exit code 1
	I0919 22:23:13.755437   69358 network_create.go:287] error running [docker network inspect ha-326307]: docker network inspect ha-326307: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-326307 not found
	I0919 22:23:13.755455   69358 network_create.go:289] output of [docker network inspect ha-326307]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-326307 not found
	
	** /stderr **
	I0919 22:23:13.755563   69358 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:13.774541   69358 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018eb270}
	I0919 22:23:13.774578   69358 network_create.go:124] attempt to create docker network ha-326307 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:23:13.774619   69358 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-326307 ha-326307
	I0919 22:23:13.834699   69358 network_create.go:108] docker network ha-326307 192.168.49.0/24 created
	I0919 22:23:13.834730   69358 kic.go:121] calculated static IP "192.168.49.2" for the "ha-326307" container
	I0919 22:23:13.834799   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:23:13.852316   69358 cli_runner.go:164] Run: docker volume create ha-326307 --label name.minikube.sigs.k8s.io=ha-326307 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:23:13.872969   69358 oci.go:103] Successfully created a docker volume ha-326307
	I0919 22:23:13.873115   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307 --entrypoint /usr/bin/test -v ha-326307:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:23:14.277718   69358 oci.go:107] Successfully prepared a docker volume ha-326307
	I0919 22:23:14.277762   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:14.277789   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:23:14.277852   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:23:18.547851   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.269954037s)
	I0919 22:23:18.547886   69358 kic.go:203] duration metric: took 4.270092787s to extract preloaded images to volume ...
	W0919 22:23:18.548002   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:23:18.548044   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:23:18.548091   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:23:18.602395   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307 --name ha-326307 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307 --network ha-326307 --ip 192.168.49.2 --volume ha-326307:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:23:18.902433   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Running}}
	I0919 22:23:18.923488   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:18.945324   69358 cli_runner.go:164] Run: docker exec ha-326307 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:23:18.998198   69358 oci.go:144] the created container "ha-326307" has a running status.
	I0919 22:23:18.998254   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa...
	I0919 22:23:19.305578   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:23:19.305639   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:23:19.338987   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:19.361057   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:23:19.361077   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:23:19.423644   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:19.446710   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:23:19.446815   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.468914   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.469178   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.469194   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:23:19.609654   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:23:19.609685   69358 ubuntu.go:182] provisioning hostname "ha-326307"
	I0919 22:23:19.609806   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.631352   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.631769   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.631790   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307 && echo "ha-326307" | sudo tee /etc/hostname
	I0919 22:23:19.783770   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:23:19.783868   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:19.802757   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:19.802967   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:23:19.802990   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:23:19.942778   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:23:19.942811   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:23:19.942925   69358 ubuntu.go:190] setting up certificates
	I0919 22:23:19.942949   69358 provision.go:84] configureAuth start
	I0919 22:23:19.943010   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:19.963444   69358 provision.go:143] copyHostCerts
	I0919 22:23:19.963491   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:19.963531   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:23:19.963541   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:19.963629   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:23:19.963778   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:19.963807   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:23:19.963811   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:19.963862   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:23:19.963997   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:19.964030   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:23:19.964040   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:19.964080   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:23:19.964187   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307 san=[127.0.0.1 192.168.49.2 ha-326307 localhost minikube]
	I0919 22:23:20.747311   69358 provision.go:177] copyRemoteCerts
	I0919 22:23:20.747377   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:23:20.747410   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:20.766468   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:20.866991   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:23:20.867057   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:23:20.897799   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:23:20.897858   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:23:20.925953   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:23:20.926026   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:23:20.954845   69358 provision.go:87] duration metric: took 1.011880735s to configureAuth
	I0919 22:23:20.954872   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:23:20.955074   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:20.955089   69358 machine.go:96] duration metric: took 1.508356629s to provisionDockerMachine
	I0919 22:23:20.955096   69358 client.go:171] duration metric: took 7.235738314s to LocalClient.Create
	I0919 22:23:20.955122   69358 start.go:167] duration metric: took 7.235806728s to libmachine.API.Create "ha-326307"
	I0919 22:23:20.955128   69358 start.go:293] postStartSetup for "ha-326307" (driver="docker")
	I0919 22:23:20.955136   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:23:20.955224   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:23:20.955259   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:20.975767   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.077921   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:23:21.081820   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:23:21.081872   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:23:21.081881   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:23:21.081888   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:23:21.081901   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:23:21.081973   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:23:21.082057   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:23:21.082071   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:23:21.082204   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:23:21.092245   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:21.123732   69358 start.go:296] duration metric: took 168.590139ms for postStartSetup
	I0919 22:23:21.124127   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:21.143109   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:21.143414   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:23:21.143466   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.162970   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.258062   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:23:21.263437   69358 start.go:128] duration metric: took 7.546444684s to createHost
	I0919 22:23:21.263491   69358 start.go:83] releasing machines lock for "ha-326307", held for 7.546570423s
	I0919 22:23:21.263561   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:23:21.282251   69358 ssh_runner.go:195] Run: cat /version.json
	I0919 22:23:21.282309   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.282391   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:23:21.282539   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:21.302076   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.302858   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:21.477003   69358 ssh_runner.go:195] Run: systemctl --version
	I0919 22:23:21.481946   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:23:21.486736   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:23:21.519470   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:23:21.519573   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:23:21.549703   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
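For reference, a rough spot-check of what the two find/sed/mv passes above leave behind in /etc/cni/net.d. The expectations come from the commands and cni.go messages in the log; the exact loopback filename on the kicbase image is an assumption.

    # Illustrative only: the loopback config stays active (now carrying a "name" field and cniVersion 1.0.0),
    # while 87-podman-bridge.conflist and 100-crio-bridge.conf gain a .mk_disabled suffix.
    sudo ls /etc/cni/net.d/
    sudo cat /etc/cni/net.d/*loopback.conf*
    # Expected shape after the sed pass:
    #   { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }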
	I0919 22:23:21.549736   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:23:21.549772   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:23:21.549813   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:23:21.563897   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:23:21.577043   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:23:21.577104   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:23:21.591898   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:23:21.607905   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:23:21.677531   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:23:21.749223   69358 docker.go:234] disabling docker service ...
	I0919 22:23:21.749348   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:23:21.771648   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:23:21.786268   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:23:21.864247   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:23:21.930620   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:23:21.943680   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:23:21.963319   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:23:21.977473   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:23:21.989630   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:23:21.989705   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:23:22.001778   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:22.013415   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:23:22.024683   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:22.036042   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:23:22.047238   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:23:22.060239   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:23:22.074324   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
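The series of sed edits above rewrites /etc/containerd/config.toml in place; a quick, purely illustrative way to confirm their effect before the daemon-reload and containerd restart that follow:

    # Values taken directly from the substitutions in the log above.
    grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # Expected:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   SystemdCgroup = true
    #   conf_dir = "/etc/cni/net.d"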
	I0919 22:23:22.087081   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:23:22.099883   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:23:22.110348   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:22.180253   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:23:22.295748   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:23:22.295832   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:23:22.300535   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:23:22.300597   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:23:22.304676   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:23:22.344790   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
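The version fields above are reported by crictl, which reads its endpoint from the /etc/crictl.yaml written a few lines earlier; reproducing the check by hand would look roughly like this (illustrative, not part of the minikube flow):

    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///run/containerd/containerd.sock
    sudo crictl version
    # Reports RuntimeName containerd, RuntimeVersion 1.7.27, RuntimeApiVersion v1, matching the log.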
	I0919 22:23:22.344850   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:22.371338   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:22.400934   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:23:22.402669   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:22.421952   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:23:22.426523   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:22.442415   69358 kubeadm.go:875] updating cluster {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:23:22.442712   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:22.442823   69358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:23:22.482684   69358 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:23:22.482710   69358 containerd.go:534] Images already preloaded, skipping extraction
	I0919 22:23:22.482762   69358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:23:22.518500   69358 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:23:22.518526   69358 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:23:22.518533   69358 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0919 22:23:22.518616   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:23:22.518668   69358 ssh_runner.go:195] Run: sudo crictl info
	I0919 22:23:22.554956   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:22.554993   69358 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:23:22.555004   69358 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:23:22.555029   69358 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326307 NodeName:ha-326307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:23:22.555176   69358 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-326307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:23:22.555209   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:23:22.555273   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:23:22.568901   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:23:22.569038   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
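Two details tie the manifest above to the lsmod probe before it: with the ip_vs modules unavailable, control-plane load balancing is skipped, and the vip_arp/address env vars mean the VIP 192.168.49.254 is announced via ARP on eth0. A hypothetical way to re-run the same probe by hand:

    # Mirrors the check minikube ran above; the echo text is an interpretation, not minikube output.
    sudo lsmod | grep ip_vs \
      || echo "ip_vs modules not loaded: kube-vip runs the ARP-based VIP without IPVS load balancing"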
	I0919 22:23:22.569091   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:23:22.580223   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:23:22.580317   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:23:22.591268   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 22:23:22.612688   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:23:22.636770   69358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0919 22:23:22.658657   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
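The four scp operations above stage the kubelet drop-in shown earlier, the base kubelet.service unit, the kubeadm config and the kube-vip static pod manifest. A hypothetical spot-check of the systemd pieces before the daemon-reload below:

    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the [Service] ExecStart override from the log
    systemctl cat kubelet --no-pager                            # shows the base unit plus drop-ins as staged on disk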
	I0919 22:23:22.681384   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:23:22.685531   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
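Together with the earlier host.minikube.internal entry, the command above leaves two minikube-specific records in the node's /etc/hosts; verifying them is a one-liner (illustrative):

    grep 'minikube.internal' /etc/hosts
    # 192.168.49.1     host.minikube.internal
    # 192.168.49.254   control-plane.minikube.internal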
	I0919 22:23:22.698340   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:22.769217   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:23:22.792280   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.2
	I0919 22:23:22.792300   69358 certs.go:194] generating shared ca certs ...
	I0919 22:23:22.792315   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.792509   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:23:22.792553   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:23:22.792563   69358 certs.go:256] generating profile certs ...
	I0919 22:23:22.792630   69358 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:23:22.792643   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt with IP's: []
	I0919 22:23:22.975725   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt ...
	I0919 22:23:22.975759   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt: {Name:mk32bca88dd6748516774b56251f96e4fc38a69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.975973   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key ...
	I0919 22:23:22.975990   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key: {Name:mkc0e836c004e527dbd2787dc00463a0715cf8a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:22.976108   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226
	I0919 22:23:22.976125   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:23:23.460427   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 ...
	I0919 22:23:23.460460   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226: {Name:mk98859e0e43a6d4b4da591dc89695908954cc81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.460672   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226 ...
	I0919 22:23:23.460693   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226: {Name:mk3473c1668aec72ec5a5598645b70e29415cdd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.460941   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.9685e226 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:23:23.461078   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.9685e226 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
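The apiserver certificate assembled above was signed for the SANs listed in the log, including the service IP 10.96.0.1, the node IP 192.168.49.2 and the HA VIP 192.168.49.254; they can be inspected directly (illustrative, path taken from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt \
      | grep -A1 'Subject Alternative Name'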
	I0919 22:23:23.461207   69358 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:23:23.461233   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt with IP's: []
	I0919 22:23:23.489621   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt ...
	I0919 22:23:23.489652   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt: {Name:mk06f3b4cfde33781bd7076ead00f94525257452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.489837   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key ...
	I0919 22:23:23.489860   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key: {Name:mk632a617a99ac85bf5a9b022d1173caf8e7b208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:23.489978   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:23:23.490003   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:23:23.490018   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:23:23.490034   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:23:23.490051   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:23:23.490069   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:23:23.490087   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:23:23.490100   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:23:23.490185   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:23:23.490228   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:23:23.490238   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:23:23.490273   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:23:23.490304   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:23:23.490333   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:23:23.490390   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:23.490435   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.490455   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.490497   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.491033   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:23:23.517815   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:23:23.544857   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:23:23.571386   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:23:23.600966   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:23:23.629855   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:23:23.657907   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:23:23.685564   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:23:23.713503   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:23:23.745344   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:23:23.774311   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:23:23.807603   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:23:23.832523   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:23:23.839649   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:23:23.851364   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.856325   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.856396   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:23:23.864469   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:23:23.876649   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:23:23.888129   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.892889   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.892949   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:23:23.901167   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:23:23.912487   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:23:23.924831   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.929289   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.929357   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:23.937110   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
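The link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes, which is why each ln -fs is paired with an openssl x509 -hash call; spelled out as a sketch:

    # Illustrative: derive the hash-named symlink for the minikube CA exactly as the steps above do.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 per the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"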
	I0919 22:23:23.948517   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:23:23.952948   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:23:23.953011   69358 kubeadm.go:392] StartCluster: {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:23.953080   69358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 22:23:23.953122   69358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:23:23.991138   69358 cri.go:89] found id: ""
	I0919 22:23:23.991247   69358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:23:24.003111   69358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:23:24.013643   69358 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:23:24.013714   69358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:23:24.024557   69358 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:23:24.024576   69358 kubeadm.go:157] found existing configuration files:
	
	I0919 22:23:24.024633   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:23:24.035252   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:23:24.035322   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:23:24.045590   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:23:24.056529   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:23:24.056590   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:23:24.066716   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:23:24.077570   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:23:24.077653   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:23:24.088177   69358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:23:24.098372   69358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:23:24.098426   69358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:23:24.108265   69358 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:23:24.149643   69358 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:23:24.149730   69358 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:23:24.166048   69358 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:23:24.166117   69358 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:23:24.166172   69358 kubeadm.go:310] OS: Linux
	I0919 22:23:24.166213   69358 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:23:24.166275   69358 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:23:24.166357   69358 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:23:24.166446   69358 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:23:24.166536   69358 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:23:24.166608   69358 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:23:24.166683   69358 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:23:24.166760   69358 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:23:24.230351   69358 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:23:24.230487   69358 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:23:24.230602   69358 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:23:24.238806   69358 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:23:24.243498   69358 out.go:252]   - Generating certificates and keys ...
	I0919 22:23:24.243610   69358 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:23:24.243715   69358 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:23:24.335199   69358 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:23:24.361175   69358 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:23:24.769077   69358 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:23:25.053293   69358 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:23:25.392067   69358 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:23:25.392251   69358 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-326307 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:23:25.629558   69358 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:23:25.629706   69358 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-326307 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:23:26.141828   69358 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:23:26.343650   69358 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:23:26.737207   69358 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:23:26.737292   69358 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:23:27.020543   69358 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:23:27.208963   69358 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:23:27.382044   69358 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:23:27.660395   69358 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:23:27.867964   69358 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:23:27.868475   69358 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:23:27.870857   69358 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:23:27.873408   69358 out.go:252]   - Booting up control plane ...
	I0919 22:23:27.873545   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:23:27.873665   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:23:27.873811   69358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:23:27.884709   69358 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:23:27.884874   69358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:23:27.892815   69358 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:23:27.893043   69358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:23:27.893108   69358 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:23:27.981591   69358 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:23:27.981772   69358 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:23:29.484085   69358 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501867716s
	I0919 22:23:29.488057   69358 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:23:29.488269   69358 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:23:29.488401   69358 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:23:29.488636   69358 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:23:31.058022   69358 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.569932465s
	I0919 22:23:31.762139   69358 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.27419796s
	I0919 22:23:33.991284   69358 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.503282233s
	I0919 22:23:34.005767   69358 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:23:34.017935   69358 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:23:34.032336   69358 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:23:34.032534   69358 kubeadm.go:310] [mark-control-plane] Marking the node ha-326307 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:23:34.042496   69358 kubeadm.go:310] [bootstrap-token] Using token: ym5hq4.pw1tvtip1io4ljbf
	I0919 22:23:34.044381   69358 out.go:252]   - Configuring RBAC rules ...
	I0919 22:23:34.044558   69358 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:23:34.048649   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:23:34.057509   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:23:34.061297   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:23:34.064926   69358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:23:34.069534   69358 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:23:34.399239   69358 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:23:34.818126   69358 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:23:35.398001   69358 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:23:35.398907   69358 kubeadm.go:310] 
	I0919 22:23:35.399007   69358 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:23:35.399035   69358 kubeadm.go:310] 
	I0919 22:23:35.399120   69358 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:23:35.399149   69358 kubeadm.go:310] 
	I0919 22:23:35.399207   69358 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:23:35.399301   69358 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:23:35.399350   69358 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:23:35.399356   69358 kubeadm.go:310] 
	I0919 22:23:35.399402   69358 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:23:35.399408   69358 kubeadm.go:310] 
	I0919 22:23:35.399470   69358 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:23:35.399481   69358 kubeadm.go:310] 
	I0919 22:23:35.399554   69358 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:23:35.399644   69358 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:23:35.399706   69358 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:23:35.399712   69358 kubeadm.go:310] 
	I0919 22:23:35.399803   69358 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:23:35.399888   69358 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:23:35.399892   69358 kubeadm.go:310] 
	I0919 22:23:35.399971   69358 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ym5hq4.pw1tvtip1io4ljbf \
	I0919 22:23:35.400068   69358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 \
	I0919 22:23:35.400089   69358 kubeadm.go:310] 	--control-plane 
	I0919 22:23:35.400093   69358 kubeadm.go:310] 
	I0919 22:23:35.400204   69358 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:23:35.400217   69358 kubeadm.go:310] 
	I0919 22:23:35.400285   69358 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ym5hq4.pw1tvtip1io4ljbf \
	I0919 22:23:35.400382   69358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 
	I0919 22:23:35.403119   69358 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:23:35.403274   69358 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 22:23:35.403305   69358 cni.go:84] Creating CNI manager for ""
	I0919 22:23:35.403317   69358 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:23:35.407302   69358 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:23:35.409983   69358 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:23:35.415011   69358 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:23:35.415039   69358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:23:35.436210   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
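For multi-node profiles minikube falls back to kindnet, applied here from /var/tmp/minikube/cni.yaml; a hypothetical follow-up check that the CNI daemonset landed (the daemonset name itself is not shown in the log):

    sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get daemonsets -n kube-system
    # A kindnet daemonset is expected alongside kube-proxy once the manifest is applied.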
	I0919 22:23:35.679694   69358 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:23:35.679756   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:35.679779   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307 minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=true
	I0919 22:23:35.787076   69358 ops.go:34] apiserver oom_adj: -16
	I0919 22:23:35.787237   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:36.287327   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:36.787300   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:37.287415   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:37.788066   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:38.287401   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:38.787731   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.288028   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.788301   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:23:39.864456   69358 kubeadm.go:1105] duration metric: took 4.184765822s to wait for elevateKubeSystemPrivileges
	I0919 22:23:39.864500   69358 kubeadm.go:394] duration metric: took 15.911493151s to StartCluster
	I0919 22:23:39.864524   69358 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:39.864601   69358 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:23:39.865911   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:39.866255   69358 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:39.866275   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:23:39.866288   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:23:39.866297   69358 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:23:39.866377   69358 addons.go:69] Setting storage-provisioner=true in profile "ha-326307"
	I0919 22:23:39.866398   69358 addons.go:238] Setting addon storage-provisioner=true in "ha-326307"
	I0919 22:23:39.866400   69358 addons.go:69] Setting default-storageclass=true in profile "ha-326307"
	I0919 22:23:39.866428   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:39.866523   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:39.866434   69358 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-326307"
	I0919 22:23:39.866921   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.867012   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.892851   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:23:39.893863   69358 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:23:39.893944   69358 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:23:39.893953   69358 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:23:39.894002   69358 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:23:39.894061   69358 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:23:39.893888   69358 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:23:39.894642   69358 addons.go:238] Setting addon default-storageclass=true in "ha-326307"
	I0919 22:23:39.894691   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:39.895196   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:39.895724   69358 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:23:39.897293   69358 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:23:39.897315   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:23:39.897386   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:39.923915   69358 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:23:39.923939   69358 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:23:39.924001   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:39.926323   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:39.953300   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:39.968501   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:23:40.065441   69358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:23:40.083647   69358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:23:40.190461   69358 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
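The sed pipeline above rewrites the coredns ConfigMap in place so that host.minikube.internal resolves to the docker network gateway (192.168.49.1). A minimal way to confirm the injected block, assuming kubectl is pointed at this profile's kubeconfig (an assumption, not something shown in the log):

    # Show the hosts block the pipeline inserted into the Corefile
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
    # Expected to contain: 192.168.49.1 host.minikube.internal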
	I0919 22:23:40.433561   69358 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:23:40.435567   69358 addons.go:514] duration metric: took 569.25898ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:23:40.435633   69358 start.go:246] waiting for cluster config update ...
	I0919 22:23:40.435651   69358 start.go:255] writing updated cluster config ...
	I0919 22:23:40.437510   69358 out.go:203] 
	I0919 22:23:40.439070   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:40.439141   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:40.441238   69358 out.go:179] * Starting "ha-326307-m02" control-plane node in "ha-326307" cluster
	I0919 22:23:40.443382   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:23:40.445749   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:23:40.447079   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:40.447132   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:23:40.447229   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:23:40.447308   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:23:40.447326   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:23:40.447427   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:40.470325   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:23:40.470347   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:23:40.470366   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:23:40.470391   69358 start.go:360] acquireMachinesLock for ha-326307-m02: {Name:mk4919a9b19250804b0f53d01bcd11efaf9a431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:23:40.470518   69358 start.go:364] duration metric: took 88.309µs to acquireMachinesLock for "ha-326307-m02"
	I0919 22:23:40.470552   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:40.470618   69358 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:23:40.473495   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:23:40.473607   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:23:40.473631   69358 client.go:168] LocalClient.Create starting
	I0919 22:23:40.473689   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:23:40.473724   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:40.473734   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:40.473828   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:23:40.473853   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:23:40.473861   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:23:40.474095   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:40.493916   69358 network_create.go:77] Found existing network {name:ha-326307 subnet:0xc000ad7620 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:23:40.493972   69358 kic.go:121] calculated static IP "192.168.49.3" for the "ha-326307-m02" container
	I0919 22:23:40.494055   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:23:40.516112   69358 cli_runner.go:164] Run: docker volume create ha-326307-m02 --label name.minikube.sigs.k8s.io=ha-326307-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:23:40.537046   69358 oci.go:103] Successfully created a docker volume ha-326307-m02
	I0919 22:23:40.537137   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m02 --entrypoint /usr/bin/test -v ha-326307-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:23:40.991997   69358 oci.go:107] Successfully prepared a docker volume ha-326307-m02
	I0919 22:23:40.992038   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:23:40.992061   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:23:40.992121   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:23:45.362629   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.370467998s)
	I0919 22:23:45.362666   69358 kic.go:203] duration metric: took 4.370603938s to extract preloaded images to volume ...
	W0919 22:23:45.362777   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:23:45.362811   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:23:45.362846   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:23:45.417833   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307-m02 --name ha-326307-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307-m02 --network ha-326307 --ip 192.168.49.3 --volume ha-326307-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:23:45.744363   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Running}}
	I0919 22:23:45.768456   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:45.789293   69358 cli_runner.go:164] Run: docker exec ha-326307-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:23:45.846760   69358 oci.go:144] the created container "ha-326307-m02" has a running status.
	I0919 22:23:45.846794   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa...
	I0919 22:23:46.005236   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:23:46.005288   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:23:46.042640   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:46.067424   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:23:46.067455   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:23:46.132729   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:23:46.155854   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:23:46.155967   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.177181   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.177511   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.177533   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:23:46.320054   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:23:46.320089   69358 ubuntu.go:182] provisioning hostname "ha-326307-m02"
	I0919 22:23:46.320185   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.341740   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.341951   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.341965   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m02 && echo "ha-326307-m02" | sudo tee /etc/hostname
	I0919 22:23:46.497123   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:23:46.497234   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.520214   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:23:46.520436   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:23:46.520455   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:23:46.659417   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:23:46.659458   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:23:46.659492   69358 ubuntu.go:190] setting up certificates
	I0919 22:23:46.659505   69358 provision.go:84] configureAuth start
	I0919 22:23:46.659556   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:46.679498   69358 provision.go:143] copyHostCerts
	I0919 22:23:46.679551   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:46.679598   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:23:46.679605   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:23:46.679712   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:23:46.679851   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:46.679882   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:23:46.679893   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:23:46.679947   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:23:46.680043   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:46.680141   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:23:46.680185   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:23:46.680251   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:23:46.680367   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m02 san=[127.0.0.1 192.168.49.3 ha-326307-m02 localhost minikube]
	I0919 22:23:46.869190   69358 provision.go:177] copyRemoteCerts
	I0919 22:23:46.869251   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:23:46.869285   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:46.888798   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:46.988385   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:23:46.988452   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:23:47.018227   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:23:47.018299   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:23:47.046810   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:23:47.046866   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:23:47.074372   69358 provision.go:87] duration metric: took 414.855982ms to configureAuth
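configureAuth above generated a machine server certificate with the SANs listed in the provision line (127.0.0.1, 192.168.49.3, ha-326307-m02, localhost, minikube) and copied it to /etc/docker/server.pem on the new node. A sketch for inspecting those SANs from inside the m02 container; the docker exec invocation is illustrative, not a command taken from this log:

    # Print the Subject Alternative Names of the provisioned server cert
    docker exec ha-326307-m02 openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'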
	I0919 22:23:47.074400   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:23:47.074581   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:47.074598   69358 machine.go:96] duration metric: took 918.712366ms to provisionDockerMachine
	I0919 22:23:47.074607   69358 client.go:171] duration metric: took 6.600969352s to LocalClient.Create
	I0919 22:23:47.074631   69358 start.go:167] duration metric: took 6.601023702s to libmachine.API.Create "ha-326307"
	I0919 22:23:47.074642   69358 start.go:293] postStartSetup for "ha-326307-m02" (driver="docker")
	I0919 22:23:47.074650   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:23:47.074721   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:23:47.074767   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.094538   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.195213   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:23:47.199088   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:23:47.199139   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:23:47.199181   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:23:47.199191   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:23:47.199215   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:23:47.199276   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:23:47.199378   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:23:47.199394   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:23:47.199502   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:23:47.209642   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:47.240945   69358 start.go:296] duration metric: took 166.288086ms for postStartSetup
	I0919 22:23:47.241383   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:47.261061   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:23:47.261460   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:23:47.261513   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.280359   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.374609   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:23:47.379255   69358 start.go:128] duration metric: took 6.908623332s to createHost
	I0919 22:23:47.379283   69358 start.go:83] releasing machines lock for "ha-326307-m02", held for 6.908753842s
	I0919 22:23:47.379346   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:23:47.400418   69358 out.go:179] * Found network options:
	I0919 22:23:47.401854   69358 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:23:47.403072   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:23:47.403133   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:23:47.403263   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:23:47.403266   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:23:47.403326   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.403332   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:23:47.423928   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.424218   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:23:47.597529   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:23:47.630263   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:23:47.630334   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:23:47.661706   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:23:47.661733   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:23:47.661772   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:23:47.661826   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:23:47.675485   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:23:47.687726   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:23:47.687780   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:23:47.701818   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:23:47.717912   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:23:47.789825   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:23:47.863188   69358 docker.go:234] disabling docker service ...
	I0919 22:23:47.863267   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:23:47.881757   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:23:47.893830   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:23:47.963004   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:23:48.034120   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:23:48.046843   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:23:48.065279   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:23:48.078269   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:23:48.089105   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:23:48.089186   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:23:48.099867   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:48.111076   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:23:48.122049   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:23:48.132648   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:23:48.142263   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:23:48.152876   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:23:48.163459   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:23:48.174096   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:23:48.183483   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:23:48.192780   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:48.261004   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:23:48.364434   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:23:48.364508   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:23:48.368726   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:23:48.368792   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:23:48.372683   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:23:48.409110   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
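The sed edits above switch containerd to the systemd cgroup driver (SystemdCgroup = true), pin the sandbox image to registry.k8s.io/pause:3.10.1, and point crictl at the containerd socket before the daemon is restarted. A small sketch to confirm the effective settings on the node (illustrative commands, not from the log):

    # Confirm the cgroup driver and sandbox image edits landed in the containerd config
    grep -nE 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
    # crictl was pointed at the containerd socket via /etc/crictl.yaml
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version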
	I0919 22:23:48.409200   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:48.433389   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:23:48.460529   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:23:48.462207   69358 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:23:48.464087   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:23:48.482217   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:23:48.486620   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:48.498806   69358 mustload.go:65] Loading cluster: ha-326307
	I0919 22:23:48.499032   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:23:48.499315   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:23:48.518576   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:48.518850   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.3
	I0919 22:23:48.518866   69358 certs.go:194] generating shared ca certs ...
	I0919 22:23:48.518885   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.519012   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:23:48.519082   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:23:48.519096   69358 certs.go:256] generating profile certs ...
	I0919 22:23:48.519222   69358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:23:48.519259   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4
	I0919 22:23:48.519288   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:23:48.963393   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 ...
	I0919 22:23:48.963428   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4: {Name:mk381f64cc0991e3a6417e9586b9565eb7a8dbf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.963635   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4 ...
	I0919 22:23:48.963660   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4: {Name:mk4dbead0b9c36c7a3635520729a1eb2d4b33f13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:23:48.963762   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.3b537cd4 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:23:48.963935   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
	I0919 22:23:48.964103   69358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:23:48.964120   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:23:48.964138   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:23:48.964166   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:23:48.964183   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:23:48.964200   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:23:48.964218   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:23:48.964234   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:23:48.964251   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:23:48.964313   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:23:48.964355   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:23:48.964366   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:23:48.964406   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:23:48.964438   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:23:48.964471   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:23:48.964528   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:23:48.964570   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:23:48.964592   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:23:48.964612   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:48.964731   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:48.983907   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:49.073692   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:23:49.078819   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:23:49.094234   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:23:49.099593   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:23:49.113663   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:23:49.117744   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:23:49.133048   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:23:49.136861   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:23:49.150734   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:23:49.154901   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:23:49.169388   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:23:49.173566   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:23:49.188070   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:23:49.215594   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:23:49.243561   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:23:49.271624   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:23:49.301814   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:23:49.332556   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:23:49.360723   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:23:49.388872   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:23:49.417316   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:23:49.448722   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:23:49.476877   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:23:49.504914   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:23:49.524969   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:23:49.544942   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:23:49.564506   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:23:49.584887   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:23:49.605725   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:23:49.625552   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:23:49.645811   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:23:49.652062   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:23:49.664544   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.668823   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.668889   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:23:49.676892   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:23:49.688737   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:23:49.699741   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.703762   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.703823   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:23:49.711311   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:23:49.721987   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:23:49.732874   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.737289   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.737351   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:23:49.745312   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
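Each `openssl x509 -hash -noout` call above computes the subject hash that OpenSSL uses to look up a CA in /etc/ssl/certs, and the following `ln -fs` creates the matching <hash>.0 symlink (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). The same layout can be reproduced by hand; a sketch using the minikubeCA file as the example (paths as in the log, the direct link is an illustration):

    # Derive the subject hash and create the symlink OpenSSL expects
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    # Verify the cert now resolves through the hashed lookup path
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem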
	I0919 22:23:49.756384   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:23:49.760242   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:23:49.760315   69358 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0919 22:23:49.760415   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
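The unit text above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (and the base unit to /lib/systemd/system/kubelet.service) a few lines further down; the empty ExecStart= line clears the packaged command before the minikube-specific one is set. To see the merged unit and the flags systemd will actually run on the node (illustrative, not from the log):

    # Show the base unit plus the 10-kubeadm.conf drop-in as systemd resolves them
    systemctl cat kubelet
    # After daemon-reload, confirm the overridden ExecStart is in effect
    systemctl show kubelet -p ExecStart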
	I0919 22:23:49.760438   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:23:49.760476   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:23:49.773427   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:23:49.773499   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
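Because the ip_vs modules are not available here, kube-vip falls back to ARP-based failover for the control-plane VIP: the manifest above is dropped as a static pod (the scp to /etc/kubernetes/manifests/kube-vip.yaml follows a few lines below), and the elected leader binds 192.168.49.254 on eth0 while renewing the plndr-cp-lock lease. A sketch for checking which node currently owns the VIP, assuming kubectl access to the cluster (illustrative):

    # The control-plane node holding the VIP has the address bound on eth0
    ip addr show eth0 | grep 192.168.49.254
    # kube-vip leader election is visible through its coordination lease
    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'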
	I0919 22:23:49.773549   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:23:49.784237   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:23:49.784306   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:23:49.794534   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:23:49.814529   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:23:49.837846   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:23:49.859421   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:23:49.863859   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:23:49.876721   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:23:49.948089   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:23:49.971010   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:23:49.971327   69358 start.go:317] joinCluster: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:49.971508   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:23:49.971618   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:23:49.992535   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:23:50.137695   69358 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:23:50.137740   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kb90tj.om7zof6htice1y8z --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:24:08.633363   69358 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kb90tj.om7zof6htice1y8z --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.495537277s)
	I0919 22:24:08.633404   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:24:08.849981   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307-m02 minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=false
	I0919 22:24:08.928109   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326307-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:24:09.011507   69358 start.go:319] duration metric: took 19.040175049s to joinCluster
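For reference, the join step logged above follows the standard two-stage kubeadm flow: mint a join token on an existing control-plane node ("kubeadm token create --print-join-command"), then run the printed command on the new node together with the control-plane flags seen in the log. A minimal Go sketch of that flow; the use of os/exec, the panic-on-error handling, and running both steps from one process are illustrative assumptions, not minikube's implementation:

    // Sketch: two-step control-plane join, mirroring the commands in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Step 1 (on an existing control-plane node): print a join command with a fresh token.
        out, err := exec.Command("sudo", "kubeadm", "token", "create",
            "--print-join-command", "--ttl=0").CombinedOutput()
        if err != nil {
            panic(err)
        }
        joinCmd := strings.TrimSpace(string(out))

        // Step 2 (on the joining node): run that command plus the control-plane flags
        // the log shows for the new apiserver instance.
        full := joinCmd + " --control-plane" +
            " --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
        fmt.Println("would run:", full)
    }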
	I0919 22:24:09.011590   69358 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:09.011816   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:09.013756   69358 out.go:179] * Verifying Kubernetes components...
	I0919 22:24:09.015232   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:09.115618   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:09.130578   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:24:09.130645   69358 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:24:09.130869   69358 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m02" to be "Ready" ...
	W0919 22:24:11.134373   69358 node_ready.go:57] node "ha-326307-m02" has "Ready":"False" status (will retry)
	I0919 22:24:11.634655   69358 node_ready.go:49] node "ha-326307-m02" is "Ready"
	I0919 22:24:11.634683   69358 node_ready.go:38] duration metric: took 2.503796185s for node "ha-326307-m02" to be "Ready" ...
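The node_ready wait above repeatedly reads the node object until its Ready condition reports True. A minimal sketch of the same check with client-go; the kubeconfig location (clientcmd.RecommendedHomeFile), the retry interval, and the trimmed error handling are assumptions for illustration:

    // Sketch: poll a node until its Ready condition is True, as the log's node_ready wait does.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-326307-m02", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // retry, roughly matching the cadence in the log
        }
    }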
	I0919 22:24:11.634697   69358 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:24:11.634751   69358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:24:11.647782   69358 api_server.go:72] duration metric: took 2.636155477s to wait for apiserver process to appear ...
	I0919 22:24:11.647812   69358 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:24:11.647848   69358 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:24:11.652005   69358 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:24:11.652952   69358 api_server.go:141] control plane version: v1.34.0
	I0919 22:24:11.652975   69358 api_server.go:131] duration metric: took 5.15649ms to wait for apiserver health ...
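The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint until it returns 200 with body "ok". A small sketch of that probe; skipping TLS verification is an assumption made for brevity, whereas the log's client authenticates with the cluster CA and the profile's client certificate:

    // Sketch: probe the apiserver health endpoint shown in the log.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // brevity only
        }}
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect 200 and "ok", as logged
    }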
	I0919 22:24:11.652984   69358 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:24:11.657535   69358 system_pods.go:59] 17 kube-system pods found
	I0919 22:24:11.657569   69358 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:11.657577   69358 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:11.657581   69358 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:11.657586   69358 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Pending
	I0919 22:24:11.657591   69358 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:11.657598   69358 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Pending: PodScheduled:SchedulerError (pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed)
	I0919 22:24:11.657604   69358 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:11.657609   69358 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Pending
	I0919 22:24:11.657616   69358 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:11.657621   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Pending
	I0919 22:24:11.657626   69358 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:11.657636   69358 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q8mtj": pod kube-proxy-q8mtj is already assigned to node "ha-326307-m02")
	I0919 22:24:11.657642   69358 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:11.657649   69358 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Pending
	I0919 22:24:11.657654   69358 system_pods.go:61] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:11.657660   69358 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Pending
	I0919 22:24:11.657665   69358 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:11.657673   69358 system_pods.go:74] duration metric: took 4.68298ms to wait for pod list to return data ...
	I0919 22:24:11.657687   69358 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:24:11.660430   69358 default_sa.go:45] found service account: "default"
	I0919 22:24:11.660456   69358 default_sa.go:55] duration metric: took 2.762581ms for default service account to be created ...
	I0919 22:24:11.660467   69358 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:24:11.664515   69358 system_pods.go:86] 17 kube-system pods found
	I0919 22:24:11.664549   69358 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:11.664557   69358 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:11.664563   69358 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:11.664567   69358 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Pending
	I0919 22:24:11.664574   69358 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:11.664583   69358 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Pending: PodScheduled:SchedulerError (pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed)
	I0919 22:24:11.664590   69358 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:11.664594   69358 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Pending
	I0919 22:24:11.664600   69358 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:11.664606   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Pending
	I0919 22:24:11.664615   69358 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:11.664623   69358 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q8mtj": pod kube-proxy-q8mtj is already assigned to node "ha-326307-m02")
	I0919 22:24:11.664629   69358 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:11.664637   69358 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Pending
	I0919 22:24:11.664643   69358 system_pods.go:89] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:11.664649   69358 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Pending
	I0919 22:24:11.664653   69358 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:11.664663   69358 system_pods.go:126] duration metric: took 4.189005ms to wait for k8s-apps to be running ...
	I0919 22:24:11.664676   69358 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:24:11.664734   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:24:11.677679   69358 system_svc.go:56] duration metric: took 12.991783ms WaitForService to wait for kubelet
	I0919 22:24:11.677718   69358 kubeadm.go:578] duration metric: took 2.666095008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:11.677741   69358 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:24:11.681219   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:11.681249   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:11.681276   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:11.681282   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:11.681288   69358 node_conditions.go:105] duration metric: took 3.540774ms to run NodePressure ...
	I0919 22:24:11.681302   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:24:11.681336   69358 start.go:255] writing updated cluster config ...
	I0919 22:24:11.683465   69358 out.go:203] 
	I0919 22:24:11.685336   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:11.685480   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:11.687190   69358 out.go:179] * Starting "ha-326307-m03" control-plane node in "ha-326307" cluster
	I0919 22:24:11.688774   69358 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:24:11.690230   69358 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:11.691529   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:24:11.691564   69358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:11.691570   69358 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:11.691776   69358 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:11.691792   69358 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:24:11.691940   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:11.714494   69358 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:11.714516   69358 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:11.714538   69358 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:11.714564   69358 start.go:360] acquireMachinesLock for ha-326307-m03: {Name:mk07818636650a6efffb19d787e84a34d6f1dd98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:11.714717   69358 start.go:364] duration metric: took 129.412µs to acquireMachinesLock for "ha-326307-m03"
	I0919 22:24:11.714749   69358 start.go:93] Provisioning new machine with config: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:11.714883   69358 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:24:11.717146   69358 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:11.717288   69358 start.go:159] libmachine.API.Create for "ha-326307" (driver="docker")
	I0919 22:24:11.717325   69358 client.go:168] LocalClient.Create starting
	I0919 22:24:11.717396   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 22:24:11.717429   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:11.717444   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:11.717499   69358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 22:24:11.717523   69358 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:11.717531   69358 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:11.717757   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:11.736709   69358 network_create.go:77] Found existing network {name:ha-326307 subnet:0xc001c6a9f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:11.736749   69358 kic.go:121] calculated static IP "192.168.49.4" for the "ha-326307-m03" container
	I0919 22:24:11.736838   69358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:11.757855   69358 cli_runner.go:164] Run: docker volume create ha-326307-m03 --label name.minikube.sigs.k8s.io=ha-326307-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:11.780198   69358 oci.go:103] Successfully created a docker volume ha-326307-m03
	I0919 22:24:11.780287   69358 cli_runner.go:164] Run: docker run --rm --name ha-326307-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m03 --entrypoint /usr/bin/test -v ha-326307-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:12.269719   69358 oci.go:107] Successfully prepared a docker volume ha-326307-m03
	I0919 22:24:12.269772   69358 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:24:12.269795   69358 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:12.269864   69358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:16.658999   69358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-326307-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.389088771s)
	I0919 22:24:16.659030   69358 kic.go:203] duration metric: took 4.389232064s to extract preloaded images to volume ...
	W0919 22:24:16.659114   69358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:16.659151   69358 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:16.659211   69358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:16.714324   69358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-326307-m03 --name ha-326307-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-326307-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-326307-m03 --network ha-326307 --ip 192.168.49.4 --volume ha-326307-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:17.029039   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Running}}
	I0919 22:24:17.050534   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.070017   69358 cli_runner.go:164] Run: docker exec ha-326307-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:17.125252   69358 oci.go:144] the created container "ha-326307-m03" has a running status.
	I0919 22:24:17.125293   69358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa...
	I0919 22:24:17.618351   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:17.618395   69358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:17.646956   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.667176   69358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:17.667203   69358 kic_runner.go:114] Args: [docker exec --privileged ha-326307-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:17.713667   69358 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:24:17.734276   69358 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:17.734370   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:17.755726   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:17.755941   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:17.755953   69358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:17.894482   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:24:17.894512   69358 ubuntu.go:182] provisioning hostname "ha-326307-m03"
	I0919 22:24:17.894572   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:17.914204   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:17.914507   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:17.914530   69358 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m03 && echo "ha-326307-m03" | sudo tee /etc/hostname
	I0919 22:24:18.068724   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:24:18.068805   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.088244   69358 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:18.088504   69358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0919 22:24:18.088525   69358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:18.227353   69358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:18.227390   69358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:24:18.227421   69358 ubuntu.go:190] setting up certificates
	I0919 22:24:18.227433   69358 provision.go:84] configureAuth start
	I0919 22:24:18.227496   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.247948   69358 provision.go:143] copyHostCerts
	I0919 22:24:18.247989   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:24:18.248023   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:24:18.248029   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:24:18.248096   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:24:18.248231   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:24:18.248289   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:24:18.248299   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:24:18.248338   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:24:18.248404   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:24:18.248423   69358 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:24:18.248427   69358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:24:18.248457   69358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:24:18.248512   69358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m03 san=[127.0.0.1 192.168.49.4 ha-326307-m03 localhost minikube]
	I0919 22:24:18.393257   69358 provision.go:177] copyRemoteCerts
	I0919 22:24:18.393319   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:18.393353   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.412748   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.514005   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:18.514092   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:24:18.542657   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:18.542733   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:18.569691   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:18.569759   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:18.596329   69358 provision.go:87] duration metric: took 368.876183ms to configureAuth
	I0919 22:24:18.596357   69358 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:18.596551   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:18.596562   69358 machine.go:96] duration metric: took 862.263986ms to provisionDockerMachine
	I0919 22:24:18.596567   69358 client.go:171] duration metric: took 6.879237415s to LocalClient.Create
	I0919 22:24:18.596586   69358 start.go:167] duration metric: took 6.879300568s to libmachine.API.Create "ha-326307"
	I0919 22:24:18.596594   69358 start.go:293] postStartSetup for "ha-326307-m03" (driver="docker")
	I0919 22:24:18.596602   69358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:18.596644   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:18.596677   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.615349   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.717907   69358 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:18.722093   69358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:18.722137   69358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:18.722150   69358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:18.722173   69358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:18.722186   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:24:18.722248   69358 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:24:18.722356   69358 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:24:18.722372   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:24:18.722580   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:18.732899   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:24:18.766453   69358 start.go:296] duration metric: took 169.843532ms for postStartSetup
	I0919 22:24:18.766899   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.786322   69358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:24:18.786775   69358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:18.786833   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.806377   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.901798   69358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:18.907121   69358 start.go:128] duration metric: took 7.192223106s to createHost
	I0919 22:24:18.907180   69358 start.go:83] releasing machines lock for "ha-326307-m03", held for 7.192445142s
	I0919 22:24:18.907266   69358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:24:18.929545   69358 out.go:179] * Found network options:
	I0919 22:24:18.931020   69358 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:24:18.932299   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932334   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932375   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:18.932396   69358 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:18.932501   69358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:18.932558   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.932588   69358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:18.932662   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:24:18.952990   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:18.953400   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:24:19.131622   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:19.165991   69358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:19.166079   69358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:19.197850   69358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:19.197878   69358 start.go:495] detecting cgroup driver to use...
	I0919 22:24:19.197909   69358 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:19.197960   69358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:24:19.211538   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:19.223959   69358 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:24:19.224009   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:24:19.239088   69358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:24:19.254102   69358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:24:19.328965   69358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:24:19.406808   69358 docker.go:234] disabling docker service ...
	I0919 22:24:19.406888   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:24:19.425948   69358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:24:19.438801   69358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:24:19.510941   69358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:24:19.581470   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:19.594683   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:19.613666   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:19.627192   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:19.638603   69358 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:19.638668   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:19.649965   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:19.661530   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:19.673111   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:19.684782   69358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:19.696056   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:19.707630   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:19.719687   69358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:19.731477   69358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:19.741738   69358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:19.751963   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:19.822277   69358 ssh_runner.go:195] Run: sudo systemctl restart containerd
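The sed edits above switch containerd to the systemd cgroup driver by forcing SystemdCgroup = true in /etc/containerd/config.toml before restarting the service. An equivalent sketch in Go; the in-place regexp rewrite is illustrative only, since minikube performs the change over SSH with sed as logged:

    // Sketch: set SystemdCgroup = true in containerd's config, matching the sed in the log.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Preserve indentation, replace whatever value was there with "true".
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
        if err := os.WriteFile(path, out, 0644); err != nil {
            panic(err)
        }
        // A "sudo systemctl restart containerd", as in the log, is still needed to apply it.
    }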
	I0919 22:24:19.931918   69358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:24:19.931995   69358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:24:19.936531   69358 start.go:563] Will wait 60s for crictl version
	I0919 22:24:19.936591   69358 ssh_runner.go:195] Run: which crictl
	I0919 22:24:19.940632   69358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:19.977944   69358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:24:19.978013   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:24:20.003290   69358 ssh_runner.go:195] Run: containerd --version
	I0919 22:24:20.032714   69358 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:24:20.034190   69358 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:20.035560   69358 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:24:20.036915   69358 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:20.055444   69358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:20.059762   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:20.072851   69358 mustload.go:65] Loading cluster: ha-326307
	I0919 22:24:20.073081   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:20.073298   69358 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:24:20.091365   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:24:20.091605   69358 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.4
	I0919 22:24:20.091616   69358 certs.go:194] generating shared ca certs ...
	I0919 22:24:20.091629   69358 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.091746   69358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:24:20.091786   69358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:24:20.091796   69358 certs.go:256] generating profile certs ...
	I0919 22:24:20.091865   69358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:24:20.091891   69358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604
	I0919 22:24:20.091905   69358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:24:20.372898   69358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 ...
	I0919 22:24:20.372943   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604: {Name:mk9b724916886d4c69140cc45e23ce082460d116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.373186   69358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604 ...
	I0919 22:24:20.373210   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604: {Name:mkfc0cd42f96faa2f697a81fc7ca671182c3cea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:20.373311   69358 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.95aca604 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:24:20.373471   69358 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
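The apiserver serving certificate generated above carries the SAN set listed in the log: the in-cluster service IPs, each control-plane node IP, and the HA VIP 192.168.49.254. A trimmed sketch of issuing a certificate with that SAN list; it is self-signed here for brevity (minikube signs with the profile CA), DNS SANs are omitted because they are not shown in the log, and the key size and validity period are assumptions:

    // Sketch: create a serving certificate with the IP SANs from the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"),
                net.ParseIP("192.168.49.4"), net.ParseIP("192.168.49.254"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }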
	I0919 22:24:20.373649   69358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:24:20.373668   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:20.373682   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:20.373692   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:20.373703   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:20.373713   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:20.373723   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:20.373733   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:20.373743   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:20.373795   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:24:20.373823   69358 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:20.373832   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:24:20.373856   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:24:20.373878   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:20.373899   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:20.373936   69358 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:24:20.373962   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:24:20.373976   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:20.373987   69358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:24:20.374034   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:24:20.394051   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:24:20.484593   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:20.489010   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:20.503471   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:20.507649   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:24:20.522195   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:20.526410   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:20.541840   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:20.546043   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:24:20.560364   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:20.564230   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:20.577547   69358 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:20.581387   69358 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:20.594800   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:20.622991   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:20.651461   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:20.678113   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:20.705292   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:24:20.732489   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:20.762310   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:20.789808   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:20.819251   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:24:20.851010   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:20.879714   69358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:24:20.908177   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:20.928644   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:24:20.949340   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:20.969391   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:24:20.989837   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:21.011118   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:21.031485   69358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:24:21.052354   69358 ssh_runner.go:195] Run: openssl version
	I0919 22:24:21.058486   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:24:21.069582   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.074372   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.074440   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:24:21.082186   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:21.092957   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:24:21.104085   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.108193   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.108258   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:24:21.116078   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:21.127607   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:21.139338   69358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.143794   69358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.143848   69358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:21.151321   69358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:21.162759   69358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:21.166499   69358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:21.166555   69358 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0919 22:24:21.166642   69358 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:21.166677   69358 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:21.166738   69358 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:21.180123   69358 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
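kube-vip's IPVS-based control-plane load-balancing is skipped here because no ip_vs modules are loaded in the node kernel; the VIP itself is still advertised via ARP (vip_arp is "true" in the config below). If IPVS were wanted, the modules would have to be probed first, roughly like this (illustrative; availability depends on the host kernel):

    # check for IPVS support and load the common modules if missing
    lsmod | grep ip_vs || sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh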
	I0919 22:24:21.180202   69358 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
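The static pod manifest above is what serves the APIServerHAVIP 192.168.49.254:8443 for the cluster. Once kubelet has started the kube-vip pod, a quick illustrative check that the VIP is answering (-k skips certificate verification):

    # the API server health endpoint behind the VIP should return "ok"
    curl -k https://192.168.49.254:8443/healthz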
	I0919 22:24:21.180261   69358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:21.189900   69358 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:21.189963   69358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:21.200336   69358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:24:21.220715   69358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:21.244525   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:21.268789   69358 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:21.272885   69358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
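The one-liner above is an idempotent rewrite of /etc/hosts: it drops any existing control-plane.minikube.internal entry and appends the VIP, so the file ends up with a single line of the form:

    192.168.49.254	control-plane.minikube.internal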
	I0919 22:24:21.285764   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:21.362911   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:21.394403   69358 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:24:21.394691   69358 start.go:317] joinCluster: &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.394850   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:21.394898   69358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:24:21.419020   69358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:24:21.569927   69358 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:21.569980   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uhifqr.okdtfjqzhuoxbb2e --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:24:32.089764   69358 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uhifqr.okdtfjqzhuoxbb2e --discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-326307-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.519762438s)
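With the join above complete (about 10.5s), the third control plane can be verified from any kubeconfig that reaches the cluster; an illustrative check, not part of this run:

    # all three control-plane nodes should be listed and become Ready
    kubectl get nodes -l node-role.kubernetes.io/control-plane -o wide
    # the stacked etcd should show one pod per control-plane node
    kubectl -n kube-system get pods -l component=etcd -o wide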
	I0919 22:24:32.089793   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:24:32.309566   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326307-m03 minikube.k8s.io/updated_at=2025_09_19T22_24_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-326307 minikube.k8s.io/primary=false
	I0919 22:24:32.391142   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326307-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:24:32.471336   69358 start.go:319] duration metric: took 11.076641052s to joinCluster
	I0919 22:24:32.471402   69358 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:24:32.471770   69358 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:24:32.473461   69358 out.go:179] * Verifying Kubernetes components...
	I0919 22:24:32.475427   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:32.579664   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:32.593786   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:24:32.593856   69358 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:24:32.594084   69358 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m03" to be "Ready" ...
	W0919 22:24:34.597297   69358 node_ready.go:57] node "ha-326307-m03" has "Ready":"False" status (will retry)
	I0919 22:24:35.098269   69358 node_ready.go:49] node "ha-326307-m03" is "Ready"
	I0919 22:24:35.098296   69358 node_ready.go:38] duration metric: took 2.504196997s for node "ha-326307-m03" to be "Ready" ...
	I0919 22:24:35.098310   69358 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:24:35.098358   69358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:24:35.111440   69358 api_server.go:72] duration metric: took 2.640014462s to wait for apiserver process to appear ...
	I0919 22:24:35.111465   69358 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:24:35.111483   69358 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:24:35.115724   69358 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:24:35.116810   69358 api_server.go:141] control plane version: v1.34.0
	I0919 22:24:35.116837   69358 api_server.go:131] duration metric: took 5.364462ms to wait for apiserver health ...
	I0919 22:24:35.116849   69358 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:24:35.123343   69358 system_pods.go:59] 27 kube-system pods found
	I0919 22:24:35.123372   69358 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:35.123377   69358 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:35.123380   69358 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:35.123384   69358 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:24:35.123387   69358 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Pending
	I0919 22:24:35.123390   69358 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:35.123393   69358 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:24:35.123400   69358 system_pods.go:61] "kindnet-pnj9r" [14a458fc-0e9d-42e9-9473-f7f2a6f7f571] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-pnj9r": pod kindnet-pnj9r is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123408   69358 system_pods.go:61] "kindnet-qxwpq" [173e48ec-ef56-4824-9f55-a04b199b7943] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qxwpq": pod kindnet-qxwpq is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123416   69358 system_pods.go:61] "kindnet-wcct9" [5472dcae-344b-43fb-84d1-8d0d41852cd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wcct9": pod kindnet-wcct9 is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123427   69358 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:35.123433   69358 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:24:35.123445   69358 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Pending
	I0919 22:24:35.123450   69358 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:35.123454   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running
	I0919 22:24:35.123457   69358 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Pending
	I0919 22:24:35.123461   69358 system_pods.go:61] "kube-proxy-6nmjx" [81414747-6c4e-495e-a28d-cb17f0c0c306] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-6nmjx": pod kube-proxy-6nmjx is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123465   69358 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:35.123469   69358 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:24:35.123472   69358 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ws89d": pod kube-proxy-ws89d is already assigned to node "ha-326307-m03")
	I0919 22:24:35.123477   69358 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:35.123481   69358 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running
	I0919 22:24:35.123487   69358 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Pending
	I0919 22:24:35.123489   69358 system_pods.go:61] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:35.123492   69358 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:24:35.123496   69358 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Pending
	I0919 22:24:35.123503   69358 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:35.123511   69358 system_pods.go:74] duration metric: took 6.65469ms to wait for pod list to return data ...
	I0919 22:24:35.123525   69358 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:24:35.126592   69358 default_sa.go:45] found service account: "default"
	I0919 22:24:35.126616   69358 default_sa.go:55] duration metric: took 3.083846ms for default service account to be created ...
	I0919 22:24:35.126627   69358 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:24:35.131895   69358 system_pods.go:86] 27 kube-system pods found
	I0919 22:24:35.131928   69358 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running
	I0919 22:24:35.131936   69358 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running
	I0919 22:24:35.131941   69358 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:24:35.131946   69358 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:24:35.131950   69358 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Pending
	I0919 22:24:35.131954   69358 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:24:35.131959   69358 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:24:35.131968   69358 system_pods.go:89] "kindnet-pnj9r" [14a458fc-0e9d-42e9-9473-f7f2a6f7f571] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-pnj9r": pod kindnet-pnj9r is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131975   69358 system_pods.go:89] "kindnet-qxwpq" [173e48ec-ef56-4824-9f55-a04b199b7943] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qxwpq": pod kindnet-qxwpq is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131986   69358 system_pods.go:89] "kindnet-wcct9" [5472dcae-344b-43fb-84d1-8d0d41852cd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-wcct9": pod kindnet-wcct9 is already assigned to node "ha-326307-m03")
	I0919 22:24:35.131993   69358 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:24:35.132003   69358 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:24:35.132009   69358 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Pending
	I0919 22:24:35.132015   69358 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:24:35.132022   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running
	I0919 22:24:35.132028   69358 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Pending
	I0919 22:24:35.132035   69358 system_pods.go:89] "kube-proxy-6nmjx" [81414747-6c4e-495e-a28d-cb17f0c0c306] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-6nmjx": pod kube-proxy-6nmjx is already assigned to node "ha-326307-m03")
	I0919 22:24:35.132044   69358 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:24:35.132050   69358 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:24:35.132057   69358 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ws89d": pod kube-proxy-ws89d is already assigned to node "ha-326307-m03")
	I0919 22:24:35.132067   69358 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:24:35.132076   69358 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running
	I0919 22:24:35.132082   69358 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Pending
	I0919 22:24:35.132090   69358 system_pods.go:89] "kube-vip-ha-326307" [36baecf0-60bd-41c0-a3c8-45e4f6ebddad] Running
	I0919 22:24:35.132096   69358 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:24:35.132101   69358 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Pending
	I0919 22:24:35.132107   69358 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:24:35.132117   69358 system_pods.go:126] duration metric: took 5.483041ms to wait for k8s-apps to be running ...
	I0919 22:24:35.132130   69358 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:24:35.132201   69358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:24:35.145901   69358 system_svc.go:56] duration metric: took 13.762213ms WaitForService to wait for kubelet
	I0919 22:24:35.145934   69358 kubeadm.go:578] duration metric: took 2.67451015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:35.145953   69358 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:24:35.149091   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149114   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149122   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149126   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149129   69358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:24:35.149133   69358 node_conditions.go:123] node cpu capacity is 8
	I0919 22:24:35.149137   69358 node_conditions.go:105] duration metric: took 3.180117ms to run NodePressure ...
	I0919 22:24:35.149147   69358 start.go:241] waiting for startup goroutines ...
	I0919 22:24:35.149187   69358 start.go:255] writing updated cluster config ...
	I0919 22:24:35.149520   69358 ssh_runner.go:195] Run: rm -f paused
	I0919 22:24:35.153920   69358 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:24:35.154452   69358 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:24:35.158459   69358 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9j5pw" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.164361   69358 pod_ready.go:94] pod "coredns-66bc5c9577-9j5pw" is "Ready"
	I0919 22:24:35.164388   69358 pod_ready.go:86] duration metric: took 5.90604ms for pod "coredns-66bc5c9577-9j5pw" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.164396   69358 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wqvzd" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.170275   69358 pod_ready.go:94] pod "coredns-66bc5c9577-wqvzd" is "Ready"
	I0919 22:24:35.170305   69358 pod_ready.go:86] duration metric: took 5.903438ms for pod "coredns-66bc5c9577-wqvzd" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.221651   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.227692   69358 pod_ready.go:94] pod "etcd-ha-326307" is "Ready"
	I0919 22:24:35.227721   69358 pod_ready.go:86] duration metric: took 6.035355ms for pod "etcd-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.227738   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.234705   69358 pod_ready.go:94] pod "etcd-ha-326307-m02" is "Ready"
	I0919 22:24:35.234755   69358 pod_ready.go:86] duration metric: took 6.991962ms for pod "etcd-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.234769   69358 pod_ready.go:83] waiting for pod "etcd-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:35.355285   69358 request.go:683] "Waited before sending request" delay="120.371513ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326307-m03"
	I0919 22:24:35.555444   69358 request.go:683] "Waited before sending request" delay="196.344855ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:35.955374   69358 request.go:683] "Waited before sending request" delay="196.276117ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:35.958866   69358 pod_ready.go:94] pod "etcd-ha-326307-m03" is "Ready"
	I0919 22:24:35.958897   69358 pod_ready.go:86] duration metric: took 724.121102ms for pod "etcd-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.155371   69358 request.go:683] "Waited before sending request" delay="196.353052ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:24:36.158952   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.355354   69358 request.go:683] "Waited before sending request" delay="196.272183ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307"
	I0919 22:24:36.555231   69358 request.go:683] "Waited before sending request" delay="196.389456ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307"
	I0919 22:24:36.558900   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307" is "Ready"
	I0919 22:24:36.558927   69358 pod_ready.go:86] duration metric: took 399.940435ms for pod "kube-apiserver-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.558936   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.755357   69358 request.go:683] "Waited before sending request" delay="196.333509ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307-m02"
	I0919 22:24:36.955622   69358 request.go:683] "Waited before sending request" delay="196.371107ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:36.958850   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307-m02" is "Ready"
	I0919 22:24:36.958881   69358 pod_ready.go:86] duration metric: took 399.937855ms for pod "kube-apiserver-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:36.958892   69358 pod_ready.go:83] waiting for pod "kube-apiserver-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.155391   69358 request.go:683] "Waited before sending request" delay="196.40338ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326307-m03"
	I0919 22:24:37.355336   69358 request.go:683] "Waited before sending request" delay="196.255836ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:37.358527   69358 pod_ready.go:94] pod "kube-apiserver-ha-326307-m03" is "Ready"
	I0919 22:24:37.358558   69358 pod_ready.go:86] duration metric: took 399.659411ms for pod "kube-apiserver-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.555013   69358 request.go:683] "Waited before sending request" delay="196.298446ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0919 22:24:37.559362   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.755832   69358 request.go:683] "Waited before sending request" delay="196.350309ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307"
	I0919 22:24:37.954837   69358 request.go:683] "Waited before sending request" delay="195.286624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307"
	I0919 22:24:37.958236   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307" is "Ready"
	I0919 22:24:37.958266   69358 pod_ready.go:86] duration metric: took 398.878465ms for pod "kube-controller-manager-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:37.958274   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.155758   69358 request.go:683] "Waited before sending request" delay="197.394867ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m02"
	I0919 22:24:38.355929   69358 request.go:683] "Waited before sending request" delay="196.396129ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:38.359268   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307-m02" is "Ready"
	I0919 22:24:38.359292   69358 pod_ready.go:86] duration metric: took 401.013168ms for pod "kube-controller-manager-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.359301   69358 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:38.555606   69358 request.go:683] "Waited before sending request" delay="196.234039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m03"
	I0919 22:24:38.755574   69358 request.go:683] "Waited before sending request" delay="196.387697ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:38.955366   69358 request.go:683] "Waited before sending request" delay="95.227976ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326307-m03"
	I0919 22:24:39.154881   69358 request.go:683] "Waited before sending request" delay="196.301821ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:39.555649   69358 request.go:683] "Waited before sending request" delay="192.377634ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:39.955251   69358 request.go:683] "Waited before sending request" delay="92.286577ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	W0919 22:24:40.366591   69358 pod_ready.go:104] pod "kube-controller-manager-ha-326307-m03" is not "Ready", error: <nil>
	W0919 22:24:42.367386   69358 pod_ready.go:104] pod "kube-controller-manager-ha-326307-m03" is not "Ready", error: <nil>
	I0919 22:24:43.367824   69358 pod_ready.go:94] pod "kube-controller-manager-ha-326307-m03" is "Ready"
	I0919 22:24:43.367860   69358 pod_ready.go:86] duration metric: took 5.00855284s for pod "kube-controller-manager-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.371145   69358 pod_ready.go:83] waiting for pod "kube-proxy-8kxtv" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.376946   69358 pod_ready.go:94] pod "kube-proxy-8kxtv" is "Ready"
	I0919 22:24:43.376975   69358 pod_ready.go:86] duration metric: took 5.786362ms for pod "kube-proxy-8kxtv" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.376985   69358 pod_ready.go:83] waiting for pod "kube-proxy-q8mtj" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.555396   69358 request.go:683] "Waited before sending request" delay="178.323112ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8mtj"
	I0919 22:24:43.755331   69358 request.go:683] "Waited before sending request" delay="196.35612ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m02"
	I0919 22:24:43.758666   69358 pod_ready.go:94] pod "kube-proxy-q8mtj" is "Ready"
	I0919 22:24:43.758695   69358 pod_ready.go:86] duration metric: took 381.70368ms for pod "kube-proxy-q8mtj" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.758704   69358 pod_ready.go:83] waiting for pod "kube-proxy-ws89d" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:24:43.955265   69358 request.go:683] "Waited before sending request" delay="196.399278ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ws89d"
	I0919 22:24:44.155007   69358 request.go:683] "Waited before sending request" delay="196.303687ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:44.354881   69358 request.go:683] "Waited before sending request" delay="95.2124ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ws89d"
	I0919 22:24:44.555609   69358 request.go:683] "Waited before sending request" delay="197.246504ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:44.955613   69358 request.go:683] "Waited before sending request" delay="192.471154ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	I0919 22:24:45.355390   69358 request.go:683] "Waited before sending request" delay="92.281537ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-326307-m03"
	W0919 22:24:45.765195   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:48.265294   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:50.765471   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:53.265410   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:55.265474   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:57.765267   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:24:59.765483   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:02.266617   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:04.766256   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:07.265177   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:09.265694   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:11.765032   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:13.765313   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	W0919 22:25:15.766278   69358 pod_ready.go:104] pod "kube-proxy-ws89d" is not "Ready", error: <nil>
	I0919 22:25:17.764644   69358 pod_ready.go:94] pod "kube-proxy-ws89d" is "Ready"
	I0919 22:25:17.764670   69358 pod_ready.go:86] duration metric: took 34.005951783s for pod "kube-proxy-ws89d" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.767738   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.772985   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307" is "Ready"
	I0919 22:25:17.773015   69358 pod_ready.go:86] duration metric: took 5.246042ms for pod "kube-scheduler-ha-326307" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.773023   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.778916   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307-m02" is "Ready"
	I0919 22:25:17.778942   69358 pod_ready.go:86] duration metric: took 5.914033ms for pod "kube-scheduler-ha-326307-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.778951   69358 pod_ready.go:83] waiting for pod "kube-scheduler-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.784122   69358 pod_ready.go:94] pod "kube-scheduler-ha-326307-m03" is "Ready"
	I0919 22:25:17.784165   69358 pod_ready.go:86] duration metric: took 5.193982ms for pod "kube-scheduler-ha-326307-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:17.784183   69358 pod_ready.go:40] duration metric: took 42.630226972s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:25:17.833559   69358 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 22:25:17.835536   69358 out.go:179] * Done! kubectl is now configured to use "ha-326307" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7791f71e5d5a5       8c811b4aec35f       14 minutes ago      Running             busybox                   0                   b5e0c0fffea25       busybox-7b57f96db7-m8swj
	ca68bbc020e20       52546a367cc9e       15 minutes ago      Running             coredns                   0                   132023f334782       coredns-66bc5c9577-9j5pw
	1f618dc8f0392       52546a367cc9e       15 minutes ago      Running             coredns                   0                   a5ac32b4949ab       coredns-66bc5c9577-wqvzd
	f52d2d9f5881b       6e38f40d628db       15 minutes ago      Running             storage-provisioner       0                   7b77cca917bf4       storage-provisioner
	365cc00c2e009       409467f978b4a       15 minutes ago      Running             kindnet-cni               0                   96e027ec2b5fb       kindnet-gxnzs
	bd9e41958ffbb       df0860106674d       15 minutes ago      Running             kube-proxy                0                   06da62af16945       kube-proxy-8kxtv
	c6c963d9a0cae       765655ea60781       16 minutes ago      Running             kube-vip                  0                   5717652da0ef4       kube-vip-ha-326307
	456a0c3cbf5ce       46169d968e920       16 minutes ago      Running             kube-scheduler            0                   f02b9e82ff9b1       kube-scheduler-ha-326307
	05ab0247624a7       a0af72f2ec6d6       16 minutes ago      Running             kube-controller-manager   0                   6026f58e8c23a       kube-controller-manager-ha-326307
	e5c59a6abe977       5f1f5298c888d       16 minutes ago      Running             etcd                      0                   5f89382a468ad       etcd-ha-326307
	e80d65e3c7c18       90550c43ad2bc       16 minutes ago      Running             kube-apiserver            0                   3813626701bd1       kube-apiserver-ha-326307
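The table above is the CRI's view of containers on the primary node; on a containerd-based minikube node an equivalent listing can be reproduced with crictl (illustrative):

    # list all CRI containers, including exited ones
    sudo crictl ps -a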
	
	
	==> containerd <==
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.754439323Z" level=info msg="CreateContainer within sandbox \"a5ac32b4949abcc8c1007cd2947e92633d80d759aeaf0e7d6b490f2610f81170\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.768027085Z" level=info msg="CreateContainer within sandbox \"a5ac32b4949abcc8c1007cd2947e92633d80d759aeaf0e7d6b490f2610f81170\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\""
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.768844132Z" level=info msg="StartContainer for \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\""
	Sep 19 22:23:51 ha-326307 containerd[767]: time="2025-09-19T22:23:51.836885904Z" level=info msg="StartContainer for \"1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6\" returns successfully"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.632881043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9j5pw,Uid:7d073e38-b63e-494d-bda0-3dde372a950b,Namespace:kube-system,Attempt:0,}"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.759782586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9j5pw,Uid:7d073e38-b63e-494d-bda0-3dde372a950b,Namespace:kube-system,Attempt:0,} returns sandbox id \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.765750080Z" level=info msg="CreateContainer within sandbox \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.779792584Z" level=info msg="CreateContainer within sandbox \"132023f3347828aa89cc27cb846b63299e7492a9d95f45bd87fa130aee9b5cee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.780572301Z" level=info msg="StartContainer for \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\""
	Sep 19 22:23:55 ha-326307 containerd[767]: time="2025-09-19T22:23:55.854015268Z" level=info msg="StartContainer for \"ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93\" returns successfully"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.151709073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-m8swj,Uid:7533a5f9-7c6d-4476-9e03-eb8abe0aadbc,Namespace:default,Attempt:0,}"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.267660233Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.268098400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-m8swj,Uid:7533a5f9-7c6d-4476-9e03-eb8abe0aadbc,Namespace:default,Attempt:0,} returns sandbox id \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\""
	Sep 19 22:25:19 ha-326307 containerd[767]: time="2025-09-19T22:25:19.270196453Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.412014033Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.413088793Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.414707234Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.417602556Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.418335313Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 2.148090964s"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.418383876Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.423388311Z" level=info msg="CreateContainer within sandbox \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.442455841Z" level=info msg="CreateContainer within sandbox \"b5e0c0fffea25b8c53f5de67f8e65d99323d23e48eb0c0ac619fcba386c566a1\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.443119612Z" level=info msg="StartContainer for \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\""
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.497884940Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 22:25:21 ha-326307 containerd[767]: time="2025-09-19T22:25:21.500641712Z" level=info msg="StartContainer for \"7791f71e5d5a520d6ef052d5759a1050a768d5b2e137e791635bcd0e97251f08\" returns successfully"
	
	
	==> coredns [1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54337 - 24572 "HINFO IN 5143313645322175939.5313042790825403134. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069732464s
	[INFO] 10.244.0.4:35490 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000326279s
	[INFO] 10.244.0.4:55330 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.014239882s
	[INFO] 10.244.1.2:39628 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210602s
	[INFO] 10.244.1.2:46891 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.001261026s
	[INFO] 10.244.1.2:43124 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.00098216s
	[INFO] 10.244.1.2:49555 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.00013424s
	[INFO] 10.244.0.4:40362 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00024135s
	[INFO] 10.244.0.4:45629 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168694s
	[INFO] 10.244.1.2:52354 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189457s
	[INFO] 10.244.1.2:43857 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161715s
	[INFO] 10.244.1.2:51922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145764s
	[INFO] 10.244.1.2:57320 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009888497s
	[INFO] 10.244.1.2:49841 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169285s
	[INFO] 10.244.0.4:51548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159656s
	[INFO] 10.244.0.4:48681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110507s
	[INFO] 10.244.1.2:52993 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137337s
	[INFO] 10.244.0.4:59855 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113524s
	[INFO] 10.244.1.2:56284 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188608s
	[INFO] 10.244.1.2:58675 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149085s
	[INFO] 10.244.1.2:38911 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131505s
	
	
	==> coredns [ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93] <==
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49588 - 50300 "HINFO IN 9047056621409016881.2982736294753326061. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063128768s
	[INFO] 10.244.0.4:39328 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.014249598s
	[INFO] 10.244.0.4:59759 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.013332957s
	[INFO] 10.244.0.4:50336 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.008865788s
	[INFO] 10.244.1.2:42753 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000158745s
	[INFO] 10.244.0.4:52334 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261159s
	[INFO] 10.244.0.4:43558 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010645205s
	[INFO] 10.244.0.4:51059 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154122s
	[INFO] 10.244.0.4:46147 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012594143s
	[INFO] 10.244.0.4:39163 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143755s
	[INFO] 10.244.0.4:57061 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014731s
	[INFO] 10.244.1.2:59502 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000129746s
	[INFO] 10.244.1.2:49570 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172915s
	[INFO] 10.244.1.2:48519 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000175653s
	[INFO] 10.244.0.4:50569 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000326714s
	[INFO] 10.244.0.4:45465 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000234038s
	[INFO] 10.244.1.2:52569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176154s
	[INFO] 10.244.1.2:36719 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000205481s
	[INFO] 10.244.1.2:58705 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195468s
	[INFO] 10.244.0.4:38035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216169s
	[INFO] 10.244.0.4:52287 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129599s
	[INFO] 10.244.0.4:37285 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186803s
	[INFO] 10.244.1.2:39163 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165716s
	
	
	==> describe nodes <==
	Name:               ha-326307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:23:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:39:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:38 +0000   Fri, 19 Sep 2025 22:23:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-326307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 2616418f44a84ee78b49dce19e95d1fb
	  System UUID:                9c3f30ed-68b2-4a1c-af95-9031ae210a78
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-m8swj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-9j5pw             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 coredns-66bc5c9577-wqvzd             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-ha-326307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-gxnzs                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-326307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-326307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-8kxtv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-326307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-326307                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           58s                node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	
	
	Name:               ha-326307-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:39:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:38:37 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:38:37 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:38:37 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:38:37 +0000   Fri, 19 Sep 2025 22:24:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-326307-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d091f29783a14552a9d5b1242f416003
	  System UUID:                8095cd89-f43b-4d8a-adef-b40d6aaa7ad2
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tfpvf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-326307-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-mk6pv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-326307-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-326307-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-q8mtj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-326307-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-326307-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  RegisteredNode           15m                node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node ha-326307-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x7 over 64s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           58s                node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	
	
	Name:               ha-326307-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:39:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:34:44 +0000   Fri, 19 Sep 2025 22:24:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-326307-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 1434e19b2a274233a619428a76d99322
	  System UUID:                5814a8d4-c435-490f-8e5e-a8b038e01be7
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-jdczt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-326307-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-dmxl8                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-326307-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-326307-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-ws89d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-326307-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-326307-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  15m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode  15m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode  15m   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode  58s   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	
	
	==> dmesg <==
	[Sep19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001853] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001006] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.090013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.467231] i8042: Warning: Keylock active
	[  +0.009747] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004210] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001075] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000901] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000908] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001042] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001465] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.534799] block sda: the capability attribute has been deprecated.
	[  +0.099617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027269] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.089616] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [e5c59a6abe97751de42afd27010936d1c3a401fad6cd730e75a1692a895b4fbc] <==
	{"level":"warn","ts":"2025-09-19T22:38:32.740351Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:32.840146Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:32.940745Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:33.040258Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:33.140763Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:33.156661Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:33.240301Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:33.328837Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:33.330376Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:33.340871Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:33.440939Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:33.488769Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:33.541082Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:33.572104Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:33.639888Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:33.639958Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:33.641625Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:38:34.049877Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365"}
	{"level":"info","ts":"2025-09-19T22:38:35.485838Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"e4477a6cd7815365","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-19T22:38:35.485922Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e4477a6cd7815365"}
	{"level":"info","ts":"2025-09-19T22:38:35.485955Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365"}
	{"level":"info","ts":"2025-09-19T22:38:35.486972Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"e4477a6cd7815365","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-19T22:38:35.487038Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365"}
	{"level":"info","ts":"2025-09-19T22:38:35.519565Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365"}
	{"level":"info","ts":"2025-09-19T22:38:35.522650Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e4477a6cd7815365"}
	
	
	==> kernel <==
	 22:39:38 up  1:22,  0 users,  load average: 1.07, 0.91, 0.82
	Linux ha-326307 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [365cc00c2e009eeed7e71d1202a4d406c12f0d9faee38762ba691eb2d7c71f89] <==
	I0919 22:38:50.991240       1 main.go:301] handling current node
	I0919 22:39:00.997442       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:00.997480       1 main.go:301] handling current node
	I0919 22:39:00.997495       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:00.997500       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:39:00.997712       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:00.997728       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:10.992268       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:10.992319       1 main.go:301] handling current node
	I0919 22:39:10.992339       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:10.992344       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:39:10.992556       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:10.992568       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:20.990595       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:20.990634       1 main.go:301] handling current node
	I0919 22:39:20.990655       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:20.990663       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:39:20.990874       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:20.990888       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:30.995276       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:30.995312       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:30.995572       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:30.995598       1 main.go:301] handling current node
	I0919 22:39:30.995611       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:30.995615       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e80d65e3c7c18da87f7fa003e39382f5a4285ba4782fc295197421c6b882a161] <==
	I0919 22:33:36.316232       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:33:41.440724       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:43.430235       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:04.843923       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:47.576277       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:36:07.778568       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:37:07.288814       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:37:22.531524       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43412: use of closed network connection
	E0919 22:37:22.776721       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43434: use of closed network connection
	E0919 22:37:22.970082       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43448: use of closed network connection
	E0919 22:37:23.110093       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43464: use of closed network connection
	E0919 22:37:23.308629       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43484: use of closed network connection
	E0919 22:37:23.494833       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43500: use of closed network connection
	E0919 22:37:23.634448       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43520: use of closed network connection
	E0919 22:37:23.803885       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43532: use of closed network connection
	E0919 22:37:23.968210       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43546: use of closed network connection
	E0919 22:37:26.548300       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43614: use of closed network connection
	E0919 22:37:26.721861       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43630: use of closed network connection
	E0919 22:37:26.901556       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43648: use of closed network connection
	E0919 22:37:27.077249       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43672: use of closed network connection
	E0919 22:37:27.253310       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43700: use of closed network connection
	I0919 22:37:36.706481       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:38:20.868281       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:39:06.005916       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:39:35.100583       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [05ab0247624a7f8ffa6bc948e3abc3adc49911c297291eb0a6dd42e3df39f4cd] <==
	I0919 22:23:38.744532       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:23:38.744726       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0919 22:23:38.744739       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0919 22:23:38.744729       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:23:38.744737       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:23:38.744759       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:23:38.745195       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 22:23:38.745255       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:23:38.746448       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:23:38.748706       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 22:23:38.750017       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.750086       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:23:38.751270       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 22:23:38.760899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 22:23:38.760926       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 22:23:38.760971       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 22:23:38.765332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.771790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:08.307746       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m02\" does not exist"
	I0919 22:24:08.319829       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:24:08.699971       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m02"
	E0919 22:24:31.036531       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8ztpb failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8ztpb\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:24:31.706808       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m03\" does not exist"
	I0919 22:24:31.736561       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:24:33.715916       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m03"
	
	
	==> kube-proxy [bd9e41958ffbbb27dab3d180a56fb27df36a1a1896db3a6f322a8aabcda57677] <==
	I0919 22:23:40.183862       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:23:40.251957       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:23:40.353105       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:23:40.353291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:23:40.353503       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:23:40.383440       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:23:40.383522       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:23:40.391534       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:23:40.391999       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:23:40.392045       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:23:40.394189       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:23:40.394304       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:23:40.394470       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:23:40.394480       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:23:40.394266       1 config.go:200] "Starting service config controller"
	I0919 22:23:40.394506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:23:40.394279       1 config.go:309] "Starting node config controller"
	I0919 22:23:40.394533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:23:40.394540       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:23:40.494617       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:23:40.494643       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:23:40.494649       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [456a0c3cbf5ce028b9cbac658728c1fee13ad8e2659bfa0c625cd685d711c708] <==
	E0919 22:23:33.115040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:23:33.129278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:23:33.194774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:23:33.337699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0919 22:23:35.055089       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:24:08.346116       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:08.346301       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:08.365410       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-78xs2" node="ha-326307-m02"
	E0919 22:24:08.365600       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="kube-system/kindnet-78xs2"
	E0919 22:24:08.379248       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"kindnet-78xs2\" not found" pod="kube-system/kindnet-78xs2"
	E0919 22:24:10.002296       1 schedule_one.go:975] "Scheduler cache AssumePod failed" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:10.002334       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	I0919 22:24:10.002368       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:31.751287       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-pnj9r" node="ha-326307-m03"
	E0919 22:24:31.751375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-pnj9r"
	E0919 22:24:31.887089       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:31.887576       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 173e48ec-ef56-4824-9f55-a04b199b7943(kube-system/kindnet-qxwpq) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	E0919 22:24:31.887605       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	I0919 22:24:31.888969       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:35.828083       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-nzzlb" node="ha-326307-m03"
	E0919 22:24:35.828187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-nzzlb"
	E0919 22:24:35.839864       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	E0919 22:24:35.839940       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ba4fd407-2e93-4324-ab2d-4f192d79fdf5(kube-system/kindnet-dmxl8) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	E0919 22:24:35.839964       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	I0919 22:24:35.841757       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	
	
	==> kubelet <==
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638035    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-cni-cfg\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638087    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-xtables-lock\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:23:39 ha-326307 kubelet[1670]: I0919 22:23:39.638115    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70be5fcc-7ab6-4eb1-870d-988fee1a01bb-kube-proxy\") pod \"kube-proxy-8kxtv\" (UID: \"70be5fcc-7ab6-4eb1-870d-988fee1a01bb\") " pod="kube-system/kube-proxy-8kxtv"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140870    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64376c4d-1b82-490d-887d-7f628b134014-config-volume\") pod \"coredns-66bc5c9577-wqvzd\" (UID: \"64376c4d-1b82-490d-887d-7f628b134014\") " pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140945    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d073e38-b63e-494d-bda0-3dde372a950b-config-volume\") pod \"coredns-66bc5c9577-9j5pw\" (UID: \"7d073e38-b63e-494d-bda0-3dde372a950b\") " pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.140976    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkhk\" (UniqueName: \"kubernetes.io/projected/64376c4d-1b82-490d-887d-7f628b134014-kube-api-access-8tkhk\") pod \"coredns-66bc5c9577-wqvzd\" (UID: \"64376c4d-1b82-490d-887d-7f628b134014\") " pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.141004    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gmbw\" (UniqueName: \"kubernetes.io/projected/7d073e38-b63e-494d-bda0-3dde372a950b-kube-api-access-8gmbw\") pod \"coredns-66bc5c9577-9j5pw\" (UID: \"7d073e38-b63e-494d-bda0-3dde372a950b\") " pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319752    1670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\""
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319858    1670 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\"" pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319884    1670 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\": failed to find network info for sandbox \"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\"" pod="kube-system/coredns-66bc5c9577-wqvzd"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.319966    1670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wqvzd_kube-system(64376c4d-1b82-490d-887d-7f628b134014)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wqvzd_kube-system(64376c4d-1b82-490d-887d-7f628b134014)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\\\": failed to find network info for sandbox \\\"af2200130e8f39c8e1d2909ad486622b06624f2f496c35a15b1e5e3e0886ef65\\\"\"" pod="kube-system/coredns-66bc5c9577-wqvzd" podUID="64376c4d-1b82-490d-887d-7f628b134014"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332044    1670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\""
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332130    1670 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\"" pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332205    1670 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\": failed to find network info for sandbox \"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\"" pod="kube-system/coredns-66bc5c9577-9j5pw"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: E0919 22:23:40.332288    1670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9j5pw_kube-system(7d073e38-b63e-494d-bda0-3dde372a950b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9j5pw_kube-system(7d073e38-b63e-494d-bda0-3dde372a950b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\\\": failed to find network info for sandbox \\\"533bf94488ad0e7905bcfea90e12375383188d3f9f0d630583575f8855eecb9d\\\"\"" pod="kube-system/coredns-66bc5c9577-9j5pw" podUID="7d073e38-b63e-494d-bda0-3dde372a950b"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.543914    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cafe04c6-2dce-4b93-b6d1-205efc39b360-tmp\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.543969    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47vqf\" (UniqueName: \"kubernetes.io/projected/cafe04c6-2dce-4b93-b6d1-205efc39b360-kube-api-access-47vqf\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.684901    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gxnzs" podStartSLOduration=1.68487896 podStartE2EDuration="1.68487896s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:40.684630982 +0000 UTC m=+6.151051272" watchObservedRunningTime="2025-09-19 22:23:40.68487896 +0000 UTC m=+6.151299251"
	Sep 19 22:23:40 ha-326307 kubelet[1670]: I0919 22:23:40.685802    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8kxtv" podStartSLOduration=1.685781067 podStartE2EDuration="1.685781067s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:40.670987608 +0000 UTC m=+6.137407898" watchObservedRunningTime="2025-09-19 22:23:40.685781067 +0000 UTC m=+6.152201360"
	Sep 19 22:23:41 ha-326307 kubelet[1670]: I0919 22:23:41.676063    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.676036489 podStartE2EDuration="1.676036489s" podCreationTimestamp="2025-09-19 22:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:41.675998333 +0000 UTC m=+7.142418624" watchObservedRunningTime="2025-09-19 22:23:41.676036489 +0000 UTC m=+7.142456778"
	Sep 19 22:23:45 ha-326307 kubelet[1670]: I0919 22:23:45.164667    1670 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:23:45 ha-326307 kubelet[1670]: I0919 22:23:45.165981    1670 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:23:52 ha-326307 kubelet[1670]: I0919 22:23:52.703916    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wqvzd" podStartSLOduration=13.703896267 podStartE2EDuration="13.703896267s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:52.703429297 +0000 UTC m=+18.169849612" watchObservedRunningTime="2025-09-19 22:23:52.703896267 +0000 UTC m=+18.170316558"
	Sep 19 22:23:56 ha-326307 kubelet[1670]: I0919 22:23:56.724956    1670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9j5pw" podStartSLOduration=17.724936721 podStartE2EDuration="17.724936721s" podCreationTimestamp="2025-09-19 22:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:23:56.724564031 +0000 UTC m=+22.190984322" watchObservedRunningTime="2025-09-19 22:23:56.724936721 +0000 UTC m=+22.191357012"
	Sep 19 22:25:18 ha-326307 kubelet[1670]: I0919 22:25:18.904730    1670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt2kb\" (UniqueName: \"kubernetes.io/projected/7533a5f9-7c6d-4476-9e03-eb8abe0aadbc-kube-api-access-rt2kb\") pod \"busybox-7b57f96db7-m8swj\" (UID: \"7533a5f9-7c6d-4476-9e03-eb8abe0aadbc\") " pod="default/busybox-7b57f96db7-m8swj"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-326307 -n ha-326307
helpers_test.go:269: (dbg) Run:  kubectl --context ha-326307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-jdczt
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-326307 describe pod busybox-7b57f96db7-jdczt
helpers_test.go:290: (dbg) kubectl --context ha-326307 describe pod busybox-7b57f96db7-jdczt:

-- stdout --
	Name:             busybox-7b57f96db7-jdczt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ha-326307-m03/192.168.49.4
	Start Time:       Fri, 19 Sep 2025 22:25:18 +0000
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwg8l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xwg8l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                  Age                   From               Message
	  ----     ------                  ----                  ----               -------
	  Warning  FailedScheduling        14m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-jdczt": pod busybox-7b57f96db7-jdczt is already assigned to node "ha-326307-m03"
	  Warning  FailedScheduling        14m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-jdczt": pod busybox-7b57f96db7-jdczt is already assigned to node "ha-326307-m03"
	  Normal   Scheduled               14m                   default-scheduler  Successfully assigned default/busybox-7b57f96db7-jdczt to ha-326307-m03
	  Warning  FailedCreatePodSandBox  14m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f949ef20a496c1ef4510b9586bfdf0aa02ea1ca9948f762b5576ef36acab80c9": failed to find network info for sandbox "f949ef20a496c1ef4510b9586bfdf0aa02ea1ca9948f762b5576ef36acab80c9"
	  Warning  FailedCreatePodSandBox  14m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "306b20c8e47aaeb0b6ae068e406157020ceddab45da2b4f2ab7d80c0e47f4391": failed to find network info for sandbox "306b20c8e47aaeb0b6ae068e406157020ceddab45da2b4f2ab7d80c0e47f4391"
	  Warning  FailedCreatePodSandBox  13m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e6c50e24733dc1514dd610f1c51f99bc1a57d10929036ae887a87c4b187b9ac1": failed to find network info for sandbox "e6c50e24733dc1514dd610f1c51f99bc1a57d10929036ae887a87c4b187b9ac1"
	  Warning  FailedCreatePodSandBox  13m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9d4e7715d1c071862624112264db649229347a018044c9075df60fb9940c8e8a": failed to find network info for sandbox "9d4e7715d1c071862624112264db649229347a018044c9075df60fb9940c8e8a"
	  Warning  FailedCreatePodSandBox  13m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d188384b76e1cf43ce05a368351c54023e455d3fd0fddf79dc0d717558b93ee6": failed to find network info for sandbox "d188384b76e1cf43ce05a368351c54023e455d3fd0fddf79dc0d717558b93ee6"
	  Warning  FailedCreatePodSandBox  13m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "726dbde7347664ddd373a329867d125e92a7173ca43b01448ce154579a81a0bb": failed to find network info for sandbox "726dbde7347664ddd373a329867d125e92a7173ca43b01448ce154579a81a0bb"
	  Warning  FailedCreatePodSandBox  13m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e45f823e38e8e10ed14077a52e1750763b0c366d8a775bbf53f656c10861f185": failed to find network info for sandbox "e45f823e38e8e10ed14077a52e1750763b0c366d8a775bbf53f656c10861f185"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "92ae39dcfd89289f5a5fdc5ae0c23196a91b58214916aead7f95620a2697c009": failed to find network info for sandbox "92ae39dcfd89289f5a5fdc5ae0c23196a91b58214916aead7f95620a2697c009"
	  Warning  FailedCreatePodSandBox  12m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "98a14fefd702a6a8ff4d95d8bebac62053e439ee134d840d76e175ba4e8c45d6": failed to find network info for sandbox "98a14fefd702a6a8ff4d95d8bebac62053e439ee134d840d76e175ba4e8c45d6"
	  Warning  FailedCreatePodSandBox  4m13s (x39 over 12m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "68cb483b1808e73ea325cca055c7a7f1bd2a591a81aa8b2bcb8cb96560fd08b2": failed to find network info for sandbox "68cb483b1808e73ea325cca055c7a7f1bd2a591a81aa8b2bcb8cb96560fd08b2"

-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (66.38s)
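Note on the post-mortem above: busybox-7b57f96db7-jdczt stays in ContainerCreating because every sandbox attempt fails with "failed to find network info for sandbox", which usually means no CNI configuration is present on ha-326307-m03 yet. A minimal manual triage sketch for this profile (assuming the default kindnet CNI and the standard /etc/cni/net.d directory inside the node; these commands are illustrative and not part of the recorded test run):

	# Is the kindnet pod scheduled on ha-326307-m03 actually Running?
	kubectl --context ha-326307 get pods -n kube-system -o wide | grep kindnet

	# Did the CNI plugin write a config on the affected node? (path assumed: containerd default)
	out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m03 -- ls -l /etc/cni/net.d

	# Recent kubelet activity on that node, to correlate with the FailedCreatePodSandBox events
	out/minikube-linux-amd64 -p ha-326307 ssh -n ha-326307-m03 -- sudo journalctl -u kubelet --no-pager -n 50

An empty /etc/cni/net.d while the kindnet pod is not Running would be consistent with the sandbox failures recorded above.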

TestMultiControlPlane/serial/RestartClusterKeepsNodes (425.01s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 stop --alsologtostderr -v 5: (25.994014163s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 start --wait true --alsologtostderr -v 5
E0919 22:42:11.701372   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:42:25.077418   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:43:48.144797   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 start --wait true --alsologtostderr -v 5: exit status 80 (6m36.077041165s)

-- stdout --
	* [ha-326307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-326307" primary control-plane node in "ha-326307" cluster
	* Pulling base image v0.0.48 ...
	* Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	* Enabled addons: 
	
	* Starting "ha-326307-m02" control-plane node in "ha-326307" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-326307-m03" control-plane node in "ha-326307" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	* Starting "ha-326307-m04" worker node in "ha-326307" cluster
	* Pulling base image v0.0.48 ...
	
	

-- /stdout --
** stderr ** 
	I0919 22:40:06.378966  102947 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:40:06.379330  102947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:40:06.379341  102947 out.go:374] Setting ErrFile to fd 2...
	I0919 22:40:06.379345  102947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:40:06.379571  102947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:40:06.380057  102947 out.go:368] Setting JSON to false
	I0919 22:40:06.381142  102947 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4950,"bootTime":1758316656,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:40:06.381289  102947 start.go:140] virtualization: kvm guest
	I0919 22:40:06.383708  102947 out.go:179] * [ha-326307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:40:06.385240  102947 notify.go:220] Checking for updates...
	I0919 22:40:06.385299  102947 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:40:06.386659  102947 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:40:06.388002  102947 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:40:06.389281  102947 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:40:06.390761  102947 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:40:06.392296  102947 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:40:06.394377  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:06.394567  102947 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:40:06.419564  102947 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:40:06.419671  102947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:40:06.482479  102947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:40:06.471430741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:40:06.482585  102947 docker.go:318] overlay module found
	I0919 22:40:06.484475  102947 out.go:179] * Using the docker driver based on existing profile
	I0919 22:40:06.485822  102947 start.go:304] selected driver: docker
	I0919 22:40:06.485843  102947 start.go:918] validating driver "docker" against &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:40:06.485989  102947 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:40:06.486131  102947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:40:06.542030  102947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:40:06.531788772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:40:06.542709  102947 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:40:06.542747  102947 cni.go:84] Creating CNI manager for ""
	I0919 22:40:06.542808  102947 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:40:06.542862  102947 start.go:348] cluster config:
	{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:40:06.544976  102947 out.go:179] * Starting "ha-326307" primary control-plane node in "ha-326307" cluster
	I0919 22:40:06.546636  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:06.548781  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:06.550349  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:06.550411  102947 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 22:40:06.550421  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:06.550484  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:06.550539  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:06.550548  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:06.550672  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:06.573025  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:06.573049  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:06.573066  102947 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:40:06.573093  102947 start.go:360] acquireMachinesLock for ha-326307: {Name:mk42b79b90944aab63c8b37c2f94e04ca1ebec1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:06.573185  102947 start.go:364] duration metric: took 59.872µs to acquireMachinesLock for "ha-326307"
	I0919 22:40:06.573210  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:06.573217  102947 fix.go:54] fixHost starting: 
	I0919 22:40:06.573525  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:06.592648  102947 fix.go:112] recreateIfNeeded on ha-326307: state=Stopped err=<nil>
	W0919 22:40:06.592678  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:06.594861  102947 out.go:252] * Restarting existing docker container for "ha-326307" ...
	I0919 22:40:06.594935  102947 cli_runner.go:164] Run: docker start ha-326307
	I0919 22:40:06.849585  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:06.870075  102947 kic.go:430] container "ha-326307" state is running.
	I0919 22:40:06.870543  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:40:06.891652  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:06.891897  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:06.891960  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:06.913541  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:06.913830  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32819 <nil> <nil>}
	I0919 22:40:06.913845  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:06.914579  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60650->127.0.0.1:32819: read: connection reset by peer
	I0919 22:40:10.057342  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:40:10.057370  102947 ubuntu.go:182] provisioning hostname "ha-326307"
	I0919 22:40:10.057448  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.076664  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:10.076914  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32819 <nil> <nil>}
	I0919 22:40:10.076932  102947 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307 && echo "ha-326307" | sudo tee /etc/hostname
	I0919 22:40:10.228297  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:40:10.228362  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.247319  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:10.247573  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32819 <nil> <nil>}
	I0919 22:40:10.247594  102947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:40:10.386261  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:40:10.386297  102947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:40:10.386346  102947 ubuntu.go:190] setting up certificates
	I0919 22:40:10.386360  102947 provision.go:84] configureAuth start
	I0919 22:40:10.386416  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:40:10.407761  102947 provision.go:143] copyHostCerts
	I0919 22:40:10.407810  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:10.407855  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:40:10.407875  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:10.407957  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:40:10.408069  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:10.408095  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:40:10.408103  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:10.408148  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:40:10.408242  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:10.408268  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:40:10.408278  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:10.408327  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:40:10.408399  102947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307 san=[127.0.0.1 192.168.49.2 ha-326307 localhost minikube]
	I0919 22:40:10.713645  102947 provision.go:177] copyRemoteCerts
	I0919 22:40:10.713742  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:40:10.713785  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.733589  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:10.833003  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:40:10.833079  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:40:10.860656  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:40:10.860740  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:40:10.888926  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:40:10.889032  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:40:10.916393  102947 provision.go:87] duration metric: took 530.019982ms to configureAuth
	I0919 22:40:10.916415  102947 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:40:10.916623  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:10.916638  102947 machine.go:96] duration metric: took 4.024727048s to provisionDockerMachine
	I0919 22:40:10.916646  102947 start.go:293] postStartSetup for "ha-326307" (driver="docker")
	I0919 22:40:10.916656  102947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:40:10.916705  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:40:10.916774  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.935896  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.036597  102947 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:40:11.040388  102947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:40:11.040431  102947 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:40:11.040440  102947 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:40:11.040446  102947 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:40:11.040457  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:40:11.040518  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:40:11.040597  102947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:40:11.040608  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:40:11.040710  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:40:11.050512  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:11.077986  102947 start.go:296] duration metric: took 161.32783ms for postStartSetup
	I0919 22:40:11.078088  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:40:11.078139  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:11.099514  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.193605  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:40:11.198421  102947 fix.go:56] duration metric: took 4.625199971s for fixHost
	I0919 22:40:11.198447  102947 start.go:83] releasing machines lock for "ha-326307", held for 4.625246732s
	I0919 22:40:11.198524  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:40:11.217572  102947 ssh_runner.go:195] Run: cat /version.json
	I0919 22:40:11.217596  102947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:40:11.217615  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:11.217666  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:11.238048  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.238195  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.415017  102947 ssh_runner.go:195] Run: systemctl --version
	I0919 22:40:11.420537  102947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:40:11.425907  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:40:11.447016  102947 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:40:11.447107  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:40:11.457668  102947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:40:11.457703  102947 start.go:495] detecting cgroup driver to use...
	I0919 22:40:11.457740  102947 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:40:11.457803  102947 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:40:11.473712  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:40:11.486915  102947 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:40:11.486970  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:40:11.501818  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:40:11.514985  102947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:40:11.582004  102947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:40:11.651320  102947 docker.go:234] disabling docker service ...
	I0919 22:40:11.651379  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:40:11.665822  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:40:11.678416  102947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:40:11.746878  102947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:40:11.815384  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:40:11.828348  102947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:40:11.847640  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:40:11.859649  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:40:11.871696  102947 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:40:11.871768  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:40:11.883197  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:11.894832  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:40:11.906582  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:11.918458  102947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:40:11.929108  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:40:11.940521  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:40:11.952577  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:40:11.963963  102947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:40:11.974367  102947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:40:11.985259  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:12.050391  102947 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:40:12.169871  102947 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:40:12.169947  102947 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:40:12.174079  102947 start.go:563] Will wait 60s for crictl version
	I0919 22:40:12.174139  102947 ssh_runner.go:195] Run: which crictl
	I0919 22:40:12.177946  102947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:40:12.213111  102947 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:40:12.213183  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:12.237742  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:12.267221  102947 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:40:12.268667  102947 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:40:12.287123  102947 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:40:12.291375  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:12.304417  102947 kubeadm.go:875] updating cluster {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:40:12.304576  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:12.304623  102947 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:40:12.341103  102947 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:40:12.341184  102947 containerd.go:534] Images already preloaded, skipping extraction
	I0919 22:40:12.341271  102947 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:40:12.378884  102947 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:40:12.378907  102947 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:40:12.378916  102947 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0919 22:40:12.379030  102947 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:40:12.379093  102947 ssh_runner.go:195] Run: sudo crictl info
	I0919 22:40:12.415076  102947 cni.go:84] Creating CNI manager for ""
	I0919 22:40:12.415100  102947 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:40:12.415111  102947 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:40:12.415129  102947 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326307 NodeName:ha-326307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:40:12.415290  102947 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-326307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:40:12.415312  102947 kube-vip.go:115] generating kube-vip config ...
	I0919 22:40:12.415360  102947 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:40:12.428658  102947 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:12.428770  102947 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
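The manifest above is the static pod that is copied further down to /etc/kubernetes/manifests/kube-vip.yaml. A short Go sketch, assuming a local copy named kube-vip.yaml and using gopkg.in/yaml.v3 (illustrative, not minikube's implementation), that decodes the pod and prints the image and env entries such as the advertised address and port:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// pod captures only the fields of the manifest we want to inspect.
type pod struct {
	Spec struct {
		Containers []struct {
			Image string `yaml:"image"`
			Env   []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	raw, err := os.ReadFile("kube-vip.yaml") // assumed local copy of the manifest above
	if err != nil {
		panic(err)
	}
	var p pod
	if err := yaml.Unmarshal(raw, &p); err != nil {
		panic(err)
	}
	for _, c := range p.Spec.Containers {
		fmt.Println("image:", c.Image)
		for _, e := range c.Env {
			fmt.Printf("  %s=%s\n", e.Name, e.Value) // e.g. address=192.168.49.254, port=8443
		}
	}
}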
	I0919 22:40:12.428823  102947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:40:12.438647  102947 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:40:12.438722  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:40:12.448707  102947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 22:40:12.468517  102947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:40:12.488929  102947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0919 22:40:12.510232  102947 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:40:12.530559  102947 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:40:12.534624  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
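The two commands above make the VIP pin idempotent: any existing line ending in control-plane.minikube.internal is filtered out before the fresh "IP<TAB>name" entry is appended. A rough Go equivalent of that rewrite, operating on an assumed test file rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost drops any existing entry for name, then appends "ip\tname".
func pinHost(path, ip, name string) error {
	raw, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(raw), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// hosts.test is an assumed scratch file for the sketch.
	if err := pinHost("hosts.test", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}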
	I0919 22:40:12.548237  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:12.611595  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:12.634054  102947 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.2
	I0919 22:40:12.634076  102947 certs.go:194] generating shared ca certs ...
	I0919 22:40:12.634091  102947 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:12.634256  102947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:40:12.634323  102947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:40:12.634335  102947 certs.go:256] generating profile certs ...
	I0919 22:40:12.634435  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:40:12.634462  102947 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704
	I0919 22:40:12.634473  102947 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:40:12.848520  102947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704 ...
	I0919 22:40:12.848550  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704: {Name:mkec91c90022534b703be5f6d2ae62638fdba9da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:12.848737  102947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704 ...
	I0919 22:40:12.848755  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704: {Name:mka1bfb464462bf578809e209441ee38ad333adf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:12.848871  102947 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:40:12.849067  102947 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
	I0919 22:40:12.849277  102947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:40:12.849295  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:40:12.849315  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:40:12.849337  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:40:12.849355  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:40:12.849373  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:40:12.849392  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:40:12.849410  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:40:12.849430  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:40:12.849610  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:40:12.849684  102947 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:40:12.849700  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:40:12.849733  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:40:12.849775  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:40:12.849812  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:40:12.849872  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:12.849915  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:40:12.849936  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:40:12.849955  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:12.850570  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:40:12.881412  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:40:12.909365  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:40:12.936570  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:40:12.963699  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:40:12.991460  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:40:13.019268  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:40:13.046670  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:40:13.074069  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:40:13.101424  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:40:13.128690  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:40:13.156653  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:40:13.179067  102947 ssh_runner.go:195] Run: openssl version
	I0919 22:40:13.187620  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:40:13.203083  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:40:13.209838  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:40:13.209911  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:40:13.220919  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:40:13.238903  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:40:13.253729  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:40:13.261626  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:40:13.261780  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:40:13.272880  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:40:13.287661  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:40:13.303848  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:13.308762  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:13.308833  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:13.319788  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
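Each CA above is staged under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0). A sketch of that hash-and-link step which shells out to the same openssl x509 -hash the log runs; sudo handling and error recovery are omitted, and the paths are only illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}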
	I0919 22:40:13.336323  102947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:40:13.343266  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:40:13.355799  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:40:13.367939  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:40:13.378087  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:40:13.388839  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:40:13.399528  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
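The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours. A pure-Go equivalent using crypto/x509 (a sketch, not minikube's implementation; the path is one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}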
	I0919 22:40:13.412341  102947 kubeadm.go:392] StartCluster: {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:40:13.412499  102947 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 22:40:13.412584  102947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:40:13.476121  102947 cri.go:89] found id: "83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad"
	I0919 22:40:13.476178  102947 cri.go:89] found id: "63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284"
	I0919 22:40:13.476184  102947 cri.go:89] found id: "7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c"
	I0919 22:40:13.476189  102947 cri.go:89] found id: "c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6"
	I0919 22:40:13.476197  102947 cri.go:89] found id: "e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5"
	I0919 22:40:13.476204  102947 cri.go:89] found id: "d1c181558b58c3ebc7dae5df97b80119a8c6d18dc19b0532b61999c4db0c6668"
	I0919 22:40:13.476209  102947 cri.go:89] found id: "ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93"
	I0919 22:40:13.476214  102947 cri.go:89] found id: "1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6"
	I0919 22:40:13.476221  102947 cri.go:89] found id: "f52d2d9f5881b5d50f95a3aeef2c876d51dd6b2be6c464a7464b26e8175b0fc6"
	I0919 22:40:13.476232  102947 cri.go:89] found id: "365cc00c2e009eeed7e71d1202a4d406c12f0d9faee38762ba691eb2d7c71f89"
	I0919 22:40:13.476255  102947 cri.go:89] found id: "bd9e41958ffbbb27dab3d180a56fb27df36a1a1896db3a6f322a8aabcda57677"
	I0919 22:40:13.476262  102947 cri.go:89] found id: "456a0c3cbf5ce028b9cbac658728c1fee13ad8e2659bfa0c625cd685d711c708"
	I0919 22:40:13.476267  102947 cri.go:89] found id: "05ab0247624a7f8ffa6bc948e3abc3adc49911c297291eb0a6dd42e3df39f4cd"
	I0919 22:40:13.476272  102947 cri.go:89] found id: "e5c59a6abe97751de42afd27010936d1c3a401fad6cd730e75a1692a895b4fbc"
	I0919 22:40:13.476278  102947 cri.go:89] found id: "e80d65e3c7c18da87f7fa003e39382f5a4285ba4782fc295197421c6b882a161"
	I0919 22:40:13.476285  102947 cri.go:89] found id: ""
	I0919 22:40:13.476358  102947 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0919 22:40:13.511540  102947 cri.go:116] JSON = [{"ociVersion":"1.2.0","id":"35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","pid":903,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92/rootfs","created":"2025-09-19T22:40:13.265497632Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-ha-326307_57c850ed4c5abebc96f109c9dc04f98c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-ha-3263
07","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"57c850ed4c5abebc96f109c9dc04f98c"},"owner":"root"},{"ociVersion":"1.2.0","id":"4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","pid":851,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f/rootfs","created":"2025-09-19T22:40:13.237289545Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-ha-326307_f6c96a149704fe94a8f3f9671ba1a8ff","io.kubernetes.cri.sandbox-memor
y":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f6c96a149704fe94a8f3f9671ba1a8ff"},"owner":"root"},{"ociVersion":"1.2.0","id":"63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284","pid":1109,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284/rootfs","created":"2025-09-19T22:40:13.452193435Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri.sandbox-id":"b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","io.kubernetes.cri.sandbox-name":"kube-scheduler-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-s
ystem","io.kubernetes.cri.sandbox-uid":"02be84f36b44ed11e0db130395870414"},"owner":"root"},{"ociVersion":"1.2.0","id":"7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c","pid":1081,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c/rootfs","created":"2025-09-19T22:40:13.445726517Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri.sandbox-id":"35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","io.kubernetes.cri.sandbox-name":"kube-controller-manager-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"57c850ed4c5abebc96f109c9dc04f98c"},"owner":"r
oot"},{"ociVersion":"1.2.0","id":"8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","pid":926,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d/rootfs","created":"2025-09-19T22:40:13.291697374Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-vip-ha-326307_11fc7e0ddcb5f54efe3aa73e9d205abc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-vip-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-ui
d":"11fc7e0ddcb5f54efe3aa73e9d205abc"},"owner":"root"},{"ociVersion":"1.2.0","id":"83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad","pid":1117,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad/rootfs","created":"2025-09-19T22:40:13.459929825Z","annotations":{"io.kubernetes.cri.container-name":"kube-vip","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri.sandbox-id":"8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","io.kubernetes.cri.sandbox-name":"kube-vip-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"11fc7e0ddcb5f54efe3aa73e9d205abc"},"owner":"root"},{"ociVersion":"1.2.0","id":"a85600718119d4e698fdb29d033016634be8e463e4c29a4
b1f9b6778b83c3910","pid":850,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910/rootfs","created":"2025-09-19T22:40:13.246511214Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-ha-326307_044bbdcbe96821df073716c7f05fb17d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"044bbdcbe96821df073716c7f05fb17d"},"owner":"root"},{"ociVersion":"1.2.0","id":"b84e
223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","pid":911,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248/rootfs","created":"2025-09-19T22:40:13.280883406Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-ha-326307_02be84f36b44ed11e0db130395870414","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"02be84f36b44ed11e0db
130395870414"},"owner":"root"},{"ociVersion":"1.2.0","id":"c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6","pid":1090,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6/rootfs","created":"2025-09-19T22:40:13.443035858Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910","io.kubernetes.cri.sandbox-name":"etcd-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"044bbdcbe96821df073716c7f05fb17d"},"owner":"root"},{"ociVersion":"1.2.0","id":"e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5","pid":1007,"statu
s":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5/rootfs","created":"2025-09-19T22:40:13.41525993Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri.sandbox-id":"4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","io.kubernetes.cri.sandbox-name":"kube-apiserver-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f6c96a149704fe94a8f3f9671ba1a8ff"},"owner":"root"}]
	I0919 22:40:13.511763  102947 cri.go:126] list returned 10 containers
	I0919 22:40:13.511789  102947 cri.go:129] container: {ID:35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92 Status:running}
	I0919 22:40:13.511829  102947 cri.go:131] skipping 35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92 - not in ps
	I0919 22:40:13.511840  102947 cri.go:129] container: {ID:4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f Status:running}
	I0919 22:40:13.511848  102947 cri.go:131] skipping 4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f - not in ps
	I0919 22:40:13.511854  102947 cri.go:129] container: {ID:63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284 Status:running}
	I0919 22:40:13.511864  102947 cri.go:135] skipping {63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284 running}: state = "running", want "paused"
	I0919 22:40:13.511877  102947 cri.go:129] container: {ID:7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c Status:running}
	I0919 22:40:13.511890  102947 cri.go:135] skipping {7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c running}: state = "running", want "paused"
	I0919 22:40:13.511898  102947 cri.go:129] container: {ID:8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d Status:running}
	I0919 22:40:13.511910  102947 cri.go:131] skipping 8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d - not in ps
	I0919 22:40:13.511916  102947 cri.go:129] container: {ID:83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad Status:running}
	I0919 22:40:13.511925  102947 cri.go:135] skipping {83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad running}: state = "running", want "paused"
	I0919 22:40:13.511935  102947 cri.go:129] container: {ID:a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910 Status:running}
	I0919 22:40:13.511941  102947 cri.go:131] skipping a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910 - not in ps
	I0919 22:40:13.511946  102947 cri.go:129] container: {ID:b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248 Status:running}
	I0919 22:40:13.511951  102947 cri.go:131] skipping b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248 - not in ps
	I0919 22:40:13.511957  102947 cri.go:129] container: {ID:c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6 Status:running}
	I0919 22:40:13.511969  102947 cri.go:135] skipping {c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6 running}: state = "running", want "paused"
	I0919 22:40:13.511976  102947 cri.go:129] container: {ID:e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5 Status:running}
	I0919 22:40:13.511988  102947 cri.go:135] skipping {e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5 running}: state = "running", want "paused"
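The skip decisions above come from decoding the `runc ... list -f json` output and comparing each entry's status ("running" versus the wanted "paused") and whether its ID also appeared in the earlier crictl listing. A minimal decode sketch, assuming the JSON printed above has been saved to a file named runc-list.json:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// runcContainer holds the fields of interest from `runc list -f json`.
type runcContainer struct {
	ID          string            `json:"id"`
	Status      string            `json:"status"`
	Annotations map[string]string `json:"annotations"`
}

func main() {
	raw, err := os.ReadFile("runc-list.json") // assumed local copy of the JSON above
	if err != nil {
		panic(err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(raw, &cs); err != nil {
		panic(err)
	}
	for _, c := range cs {
		// Sandboxes have no container-name annotation, so that column prints empty.
		fmt.Printf("%s %s %s\n", c.ID[:12], c.Status, c.Annotations["io.kubernetes.cri.container-name"])
	}
}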
	I0919 22:40:13.512041  102947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:40:13.524546  102947 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:40:13.524567  102947 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:40:13.524627  102947 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:40:13.537544  102947 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:13.538084  102947 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-326307" does not appear in /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:40:13.538273  102947 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14678/kubeconfig needs updating (will repair): [kubeconfig missing "ha-326307" cluster setting kubeconfig missing "ha-326307" context setting]
	I0919 22:40:13.538666  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:13.539452  102947 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
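The rest.Config printed above carries only a host plus client cert/key and CA file paths. As a hedged sketch (not what minikube itself does here), the same fields can be turned into a working client with client-go; the paths are the ones from this run and would differ on another machine:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}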
	I0919 22:40:13.540084  102947 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:40:13.540104  102947 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:40:13.540111  102947 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:40:13.540118  102947 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:40:13.540125  102947 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:40:13.540609  102947 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:40:13.540743  102947 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:40:13.555466  102947 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:40:13.555575  102947 kubeadm.go:593] duration metric: took 31.000137ms to restartPrimaryControlPlane
	I0919 22:40:13.555603  102947 kubeadm.go:394] duration metric: took 143.274252ms to StartCluster
	I0919 22:40:13.555651  102947 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:13.555800  102947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:40:13.556731  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:13.557204  102947 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:40:13.557402  102947 start.go:241] waiting for startup goroutines ...
	I0919 22:40:13.557267  102947 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:40:13.557510  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:13.561726  102947 out.go:179] * Enabled addons: 
	I0919 22:40:13.563479  102947 addons.go:514] duration metric: took 6.21303ms for enable addons: enabled=[]
	I0919 22:40:13.563535  102947 start.go:246] waiting for cluster config update ...
	I0919 22:40:13.563548  102947 start.go:255] writing updated cluster config ...
	I0919 22:40:13.565943  102947 out.go:203] 
	I0919 22:40:13.568105  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:13.568246  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:13.570538  102947 out.go:179] * Starting "ha-326307-m02" control-plane node in "ha-326307" cluster
	I0919 22:40:13.572566  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:13.574955  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:13.576797  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:13.576835  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:13.576935  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:13.576982  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:13.576999  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:13.577147  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:13.603282  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:13.603304  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:13.603323  102947 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:40:13.603356  102947 start.go:360] acquireMachinesLock for ha-326307-m02: {Name:mk4919a9b19250804b0f53d01bcd11efaf9a431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:13.603419  102947 start.go:364] duration metric: took 47.152µs to acquireMachinesLock for "ha-326307-m02"
	I0919 22:40:13.603445  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:13.603459  102947 fix.go:54] fixHost starting: m02
	I0919 22:40:13.603697  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:40:13.626324  102947 fix.go:112] recreateIfNeeded on ha-326307-m02: state=Stopped err=<nil>
	W0919 22:40:13.626352  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:13.629640  102947 out.go:252] * Restarting existing docker container for "ha-326307-m02" ...
	I0919 22:40:13.629728  102947 cli_runner.go:164] Run: docker start ha-326307-m02
	I0919 22:40:13.926841  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:40:13.950131  102947 kic.go:430] container "ha-326307-m02" state is running.
	I0919 22:40:13.950515  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:40:13.973194  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:13.973503  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:13.973577  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:13.996029  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:13.996469  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32824 <nil> <nil>}
	I0919 22:40:13.996495  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:13.997409  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55282->127.0.0.1:32824: read: connection reset by peer
	I0919 22:40:17.135269  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:40:17.135298  102947 ubuntu.go:182] provisioning hostname "ha-326307-m02"
	I0919 22:40:17.135359  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.155772  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:17.156086  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32824 <nil> <nil>}
	I0919 22:40:17.156103  102947 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m02 && echo "ha-326307-m02" | sudo tee /etc/hostname
	I0919 22:40:17.308282  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:40:17.308354  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.329394  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:17.329602  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32824 <nil> <nil>}
	I0919 22:40:17.329620  102947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:40:17.469105  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:40:17.469136  102947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:40:17.469173  102947 ubuntu.go:190] setting up certificates
	I0919 22:40:17.469188  102947 provision.go:84] configureAuth start
	I0919 22:40:17.469243  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:40:17.489456  102947 provision.go:143] copyHostCerts
	I0919 22:40:17.489512  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:17.489551  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:40:17.489560  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:17.489629  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:40:17.489711  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:17.489728  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:40:17.489735  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:17.489771  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:40:17.489846  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:17.489864  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:40:17.489870  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:17.489896  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:40:17.489952  102947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m02 san=[127.0.0.1 192.168.49.3 ha-326307-m02 localhost minikube]
	I0919 22:40:17.687121  102947 provision.go:177] copyRemoteCerts
	I0919 22:40:17.687196  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:40:17.687230  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.706618  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:17.805482  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:40:17.805552  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:40:17.834469  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:40:17.834533  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:40:17.862491  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:40:17.862578  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:40:17.891048  102947 provision.go:87] duration metric: took 421.847088ms to configureAuth
	I0919 22:40:17.891077  102947 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:40:17.891323  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:17.891337  102947 machine.go:96] duration metric: took 3.917817402s to provisionDockerMachine
	I0919 22:40:17.891348  102947 start.go:293] postStartSetup for "ha-326307-m02" (driver="docker")
	I0919 22:40:17.891362  102947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:40:17.891426  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:40:17.891475  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.911877  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.017574  102947 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:40:18.021564  102947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:40:18.021608  102947 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:40:18.021620  102947 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:40:18.021627  102947 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:40:18.021641  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:40:18.021732  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:40:18.021827  102947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:40:18.021845  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:40:18.021965  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:40:18.037625  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:18.072355  102947 start.go:296] duration metric: took 180.992211ms for postStartSetup
	I0919 22:40:18.072434  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:40:18.072488  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:18.097080  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.200976  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:40:18.207724  102947 fix.go:56] duration metric: took 4.604261714s for fixHost
	I0919 22:40:18.207752  102947 start.go:83] releasing machines lock for "ha-326307-m02", held for 4.604318809s
	I0919 22:40:18.207819  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:40:18.233683  102947 out.go:179] * Found network options:
	I0919 22:40:18.235326  102947 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:40:18.236979  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:18.237024  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:40:18.237101  102947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:40:18.237148  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:18.237186  102947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:40:18.237248  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:18.262883  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.265825  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.472261  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:40:18.501316  102947 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:40:18.501403  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:40:18.517881  102947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:40:18.517907  102947 start.go:495] detecting cgroup driver to use...
	I0919 22:40:18.517943  102947 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:40:18.518009  102947 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:40:18.540215  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:40:18.558468  102947 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:40:18.558538  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:40:18.578938  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:40:18.606098  102947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:40:18.738984  102947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:40:18.861135  102947 docker.go:234] disabling docker service ...
	I0919 22:40:18.861295  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:40:18.889797  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:40:18.903559  102947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:40:19.020834  102947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:40:19.210102  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:40:19.253298  102947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:40:19.294451  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:40:19.314809  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:40:19.329896  102947 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:40:19.329968  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:40:19.344499  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:19.359934  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:40:19.375426  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:19.390525  102947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:40:19.405742  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:40:19.419676  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:40:19.433744  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:40:19.447497  102947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:40:19.459701  102947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:40:19.472280  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:19.590393  102947 ssh_runner.go:195] Run: sudo systemctl restart containerd
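The sed edits above switch containerd's CRI plugin to the systemd cgroup driver, pin the pause image to registry.k8s.io/pause:3.10.1, point the CNI conf_dir at /etc/cni/net.d, and re-enable unprivileged ports before the daemon is restarted. A minimal sketch for verifying the result by hand (not part of the test output; assumes shell access to the node, e.g. via minikube ssh):

    # Dump the effective containerd configuration and confirm the edited keys took effect
    sudo containerd config dump | grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports'

    # containerd should be active again before kubelet is started
    systemctl is-active containerd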
	I0919 22:40:19.844194  102947 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:40:19.844268  102947 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:40:19.848691  102947 start.go:563] Will wait 60s for crictl version
	I0919 22:40:19.848750  102947 ssh_runner.go:195] Run: which crictl
	I0919 22:40:19.852912  102947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:40:19.896612  102947 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:40:19.896665  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:19.922108  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:19.951040  102947 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:40:19.952600  102947 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:40:19.954094  102947 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:40:19.972221  102947 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:40:19.976367  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
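The one-liner above is minikube's pattern for idempotently pinning a name in /etc/hosts: grep -v drops any existing line ending in the hostname, the new "IP<tab>name" mapping is appended, and the file is copied back into place with sudo. The same pattern is reused below for control-plane.minikube.internal and on the other nodes. A minimal generic sketch (NAME and IP are placeholders, not from the log):

    NAME=host.minikube.internal
    IP=192.168.49.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts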
	I0919 22:40:19.988586  102947 mustload.go:65] Loading cluster: ha-326307
	I0919 22:40:19.988826  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:19.989048  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:20.009691  102947 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:40:20.009938  102947 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.3
	I0919 22:40:20.009958  102947 certs.go:194] generating shared ca certs ...
	I0919 22:40:20.009977  102947 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:20.010097  102947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:40:20.010186  102947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:40:20.010200  102947 certs.go:256] generating profile certs ...
	I0919 22:40:20.010274  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:40:20.010317  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.d9fee4c2
	I0919 22:40:20.010351  102947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:40:20.010361  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:40:20.010388  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:40:20.010403  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:40:20.010415  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:40:20.010427  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:40:20.010440  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:40:20.010451  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:40:20.010463  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:40:20.010507  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:40:20.010541  102947 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:40:20.010552  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:40:20.010572  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:40:20.010593  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:40:20.010613  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:40:20.010656  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:20.010681  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:40:20.010696  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:20.010706  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:40:20.010750  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:20.034999  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:20.130696  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:40:20.137701  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:40:20.181406  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:40:20.188123  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:40:20.209898  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:40:20.217560  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:40:20.265391  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:40:20.271849  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:40:20.306378  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:40:20.313419  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:40:20.338279  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:40:20.344910  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:40:20.368606  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:40:20.417189  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:40:20.473868  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:40:20.554542  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:40:20.629092  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:40:20.678888  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:40:20.722550  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:40:20.778639  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:40:20.828112  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:40:20.884904  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:40:20.936206  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:40:20.979746  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:40:21.011968  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:40:21.037922  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:40:21.058425  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:40:21.078533  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:40:21.099029  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:40:21.125522  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
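The scp calls above stage the shared CA material, the profile's apiserver and proxy-client pairs, the service-account keypair, the front-proxy CA, the etcd CA and a kubeconfig on the new control-plane node under /var/lib/minikube, which minikube uses as its certificates directory on the node. A minimal sketch for listing what landed there (not part of the test output; assumes minikube ssh access and the -n node flag):

    minikube -p ha-326307 ssh -n ha-326307-m02 -- sudo ls -l /var/lib/minikube/certs /var/lib/minikube/certs/etcd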
	I0919 22:40:21.151265  102947 ssh_runner.go:195] Run: openssl version
	I0919 22:40:21.157938  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:40:21.169944  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:40:21.174243  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:40:21.174339  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:40:21.182194  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:40:21.195623  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:40:21.210343  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:21.216012  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:21.216080  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:21.226359  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:40:21.239970  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:40:21.256305  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:40:21.263490  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:40:21.263550  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:40:21.274306  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
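The hash-named symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's c_rehash convention: the link name is the certificate's subject hash plus a ".0" suffix, which is how OpenSSL finds CA certificates in /etc/ssl/certs. A minimal sketch reproducing one of them by hand (not part of the test output):

    # Print the subject hash the symlink name is derived from (b5213941 for minikubeCA above)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # Create the hash-named link OpenSSL looks up during verification
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0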
	I0919 22:40:21.289549  102947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:40:21.294844  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:40:21.305190  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:40:21.317466  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:40:21.327473  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:40:21.337404  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:40:21.346840  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
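The -checkend 86400 calls above ask openssl whether each certificate will still be valid 86400 seconds (24 hours) from now; the command exits 0 when the certificate does not expire within that window, so a zero exit status means no renewal is needed. A minimal sketch of the same check with an explicit result (not part of the test output):

    # Exit status 0: valid for at least another 24h; non-zero: expiring soon
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "certificate ok" || echo "certificate expires within 24h"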
	I0919 22:40:21.355241  102947 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0919 22:40:21.355365  102947 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
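In the kubelet unit content logged above, the empty ExecStart= line clears the ExecStart inherited from the base kubelet.service before the full command line is set; that is how a systemd drop-in replaces, rather than appends to, an existing ExecStart. The content is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the 317-byte scp a few lines below. A minimal sketch for inspecting it on the node (not part of the test output; assumes minikube ssh access):

    # Show the drop-in and the effective merged kubelet unit on ha-326307-m02
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl cat kubelet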
	I0919 22:40:21.355400  102947 kube-vip.go:115] generating kube-vip config ...
	I0919 22:40:21.355447  102947 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:40:21.372568  102947 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:21.372652  102947 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
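The manifest above is the kube-vip static pod that keeps the control-plane VIP 192.168.49.254 on eth0 of whichever control-plane node currently holds the plndr-cp-lock lease; because the ip_vs modules were unavailable (see the lsmod check above), it runs in ARP/leader-election mode without IPVS load-balancing. A minimal sketch for checking the VIP from the host (not part of the test output; assumes minikube ssh access and that this node is the current leader):

    # The VIP should appear as a secondary address on eth0 of the leader node
    minikube -p ha-326307 ssh -n ha-326307-m02 -- ip addr show eth0 | grep 192.168.49.254
    # The API server should answer on the VIP
    curl -k https://192.168.49.254:8443/healthz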
	I0919 22:40:21.372715  102947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:40:21.385812  102947 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:40:21.385902  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:40:21.396920  102947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:40:21.418422  102947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:40:21.441221  102947 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:40:21.461293  102947 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:40:21.465499  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:21.479394  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:21.609276  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:21.625324  102947 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:40:21.625678  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:21.627937  102947 out.go:179] * Verifying Kubernetes components...
	I0919 22:40:21.629432  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:21.754519  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:21.770966  102947 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:40:21.771034  102947 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:40:21.771308  102947 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m02" to be "Ready" ...
	I0919 22:40:21.780317  102947 node_ready.go:49] node "ha-326307-m02" is "Ready"
	I0919 22:40:21.780344  102947 node_ready.go:38] duration metric: took 9.008043ms for node "ha-326307-m02" to be "Ready" ...
	I0919 22:40:21.780357  102947 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:40:21.780412  102947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:40:21.794097  102947 api_server.go:72] duration metric: took 168.727042ms to wait for apiserver process to appear ...
	I0919 22:40:21.794124  102947 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:40:21.794147  102947 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:40:21.800333  102947 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:40:21.801474  102947 api_server.go:141] control plane version: v1.34.0
	I0919 22:40:21.801509  102947 api_server.go:131] duration metric: took 7.377354ms to wait for apiserver health ...
	I0919 22:40:21.801520  102947 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:40:21.810182  102947 system_pods.go:59] 24 kube-system pods found
	I0919 22:40:21.810226  102947 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.810244  102947 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.810254  102947 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.810262  102947 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.810268  102947 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running
	I0919 22:40:21.810276  102947 system_pods.go:61] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:21.810281  102947 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:21.810292  102947 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:21.810300  102947 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.810311  102947 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.810315  102947 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running
	I0919 22:40:21.810325  102947 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.810332  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.810336  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running
	I0919 22:40:21.810340  102947 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:21.810344  102947 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:21.810348  102947 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:21.810353  102947 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.810361  102947 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.810365  102947 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running
	I0919 22:40:21.810369  102947 system_pods.go:61] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:21.810372  102947 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:21.810375  102947 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:21.810378  102947 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:21.810383  102947 system_pods.go:74] duration metric: took 8.856915ms to wait for pod list to return data ...
	I0919 22:40:21.810390  102947 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:40:21.813818  102947 default_sa.go:45] found service account: "default"
	I0919 22:40:21.813853  102947 default_sa.go:55] duration metric: took 3.458375ms for default service account to be created ...
	I0919 22:40:21.813864  102947 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:40:21.820987  102947 system_pods.go:86] 24 kube-system pods found
	I0919 22:40:21.821019  102947 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.821027  102947 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.821034  102947 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.821040  102947 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.821044  102947 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running
	I0919 22:40:21.821048  102947 system_pods.go:89] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:21.821051  102947 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:21.821054  102947 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:21.821059  102947 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.821064  102947 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.821068  102947 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running
	I0919 22:40:21.821074  102947 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.821079  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.821083  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running
	I0919 22:40:21.821087  102947 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:21.821090  102947 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:21.821095  102947 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:21.821100  102947 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.821107  102947 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.821114  102947 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running
	I0919 22:40:21.821118  102947 system_pods.go:89] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:21.821121  102947 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:21.821124  102947 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:21.821127  102947 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:21.821133  102947 system_pods.go:126] duration metric: took 7.263023ms to wait for k8s-apps to be running ...
	I0919 22:40:21.821142  102947 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:40:21.821209  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:40:21.835069  102947 system_svc.go:56] duration metric: took 13.918083ms WaitForService to wait for kubelet
	I0919 22:40:21.835096  102947 kubeadm.go:578] duration metric: took 209.729975ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:40:21.835114  102947 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:40:21.839112  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:21.839140  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:21.839183  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:21.839191  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:21.839198  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:21.839203  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:21.839208  102947 node_conditions.go:105] duration metric: took 4.090003ms to run NodePressure ...
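At this point the second control plane has passed all of minikube's readiness gates: kubelet is active, the apiserver answers healthz, the 24 kube-system pods are at least Running, and the NodePressure check completed for all three nodes. A minimal sketch of the equivalent manual checks from the host (not part of the test output; assumes the kubeconfig context is named after the profile, as minikube does by default):

    kubectl --context ha-326307 get nodes -o wide
    kubectl --context ha-326307 -n kube-system get pods -o wide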
	I0919 22:40:21.839223  102947 start.go:241] waiting for startup goroutines ...
	I0919 22:40:21.839260  102947 start.go:255] writing updated cluster config ...
	I0919 22:40:21.841908  102947 out.go:203] 
	I0919 22:40:21.843889  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:21.844011  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:21.846125  102947 out.go:179] * Starting "ha-326307-m03" control-plane node in "ha-326307" cluster
	I0919 22:40:21.848304  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:21.850127  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:21.851602  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:21.851635  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:21.851746  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:21.851778  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:21.851789  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:21.851912  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:21.876321  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:21.876341  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:21.876357  102947 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:40:21.876378  102947 start.go:360] acquireMachinesLock for ha-326307-m03: {Name:mk07818636650a6efffb19d787e84a34d6f1dd98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:21.876432  102947 start.go:364] duration metric: took 39.311µs to acquireMachinesLock for "ha-326307-m03"
	I0919 22:40:21.876450  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:21.876473  102947 fix.go:54] fixHost starting: m03
	I0919 22:40:21.876688  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:40:21.896238  102947 fix.go:112] recreateIfNeeded on ha-326307-m03: state=Stopped err=<nil>
	W0919 22:40:21.896264  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:21.898402  102947 out.go:252] * Restarting existing docker container for "ha-326307-m03" ...
	I0919 22:40:21.898493  102947 cli_runner.go:164] Run: docker start ha-326307-m03
	I0919 22:40:22.169027  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:40:22.190097  102947 kic.go:430] container "ha-326307-m03" state is running.
	I0919 22:40:22.190500  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:40:22.212272  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:22.212572  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:22.212637  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:22.233877  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:22.234093  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I0919 22:40:22.234104  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:22.234859  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37302->127.0.0.1:32829: read: connection reset by peer
	I0919 22:40:25.378797  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:40:25.378831  102947 ubuntu.go:182] provisioning hostname "ha-326307-m03"
	I0919 22:40:25.378898  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:25.414501  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:25.414938  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I0919 22:40:25.415073  102947 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m03 && echo "ha-326307-m03" | sudo tee /etc/hostname
	I0919 22:40:25.588850  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:40:25.588948  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:25.610247  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:25.610522  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I0919 22:40:25.610550  102947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:40:25.754732  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:40:25.754765  102947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:40:25.754794  102947 ubuntu.go:190] setting up certificates
	I0919 22:40:25.754806  102947 provision.go:84] configureAuth start
	I0919 22:40:25.754866  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:40:25.775758  102947 provision.go:143] copyHostCerts
	I0919 22:40:25.775814  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:25.775859  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:40:25.775876  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:25.775969  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:40:25.776130  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:25.776178  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:40:25.776185  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:25.776236  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:40:25.776312  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:25.776338  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:40:25.776347  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:25.776387  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:40:25.776465  102947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m03 san=[127.0.0.1 192.168.49.4 ha-326307-m03 localhost minikube]
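configureAuth above regenerates the docker-machine server certificate for ha-326307-m03 with SANs covering 127.0.0.1, the node IP 192.168.49.4, ha-326307-m03, localhost and minikube, and the next steps copy it to /etc/docker on the node. A minimal sketch for inspecting those SANs (not part of the test output):

    openssl x509 -noout -text -in /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'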
	I0919 22:40:25.957556  102947 provision.go:177] copyRemoteCerts
	I0919 22:40:25.957614  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:40:25.957661  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:25.977125  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.075851  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:40:26.075925  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:40:26.103453  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:40:26.103525  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:40:26.130922  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:40:26.130993  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:40:26.158446  102947 provision.go:87] duration metric: took 403.627341ms to configureAuth
	I0919 22:40:26.158474  102947 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:40:26.158684  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:26.158696  102947 machine.go:96] duration metric: took 3.94610996s to provisionDockerMachine
	I0919 22:40:26.158706  102947 start.go:293] postStartSetup for "ha-326307-m03" (driver="docker")
	I0919 22:40:26.158718  102947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:40:26.158769  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:40:26.158815  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.177219  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.277051  102947 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:40:26.280902  102947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:40:26.280935  102947 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:40:26.280943  102947 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:40:26.280949  102947 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:40:26.280960  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:40:26.281017  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:40:26.281085  102947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:40:26.281094  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:40:26.281219  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:40:26.291493  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:26.319669  102947 start.go:296] duration metric: took 160.947592ms for postStartSetup
	I0919 22:40:26.319764  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:40:26.319819  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.340008  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.438911  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:40:26.444573  102947 fix.go:56] duration metric: took 4.568092826s for fixHost
	I0919 22:40:26.444606  102947 start.go:83] releasing machines lock for "ha-326307-m03", held for 4.568161658s
	I0919 22:40:26.444685  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:40:26.470387  102947 out.go:179] * Found network options:
	I0919 22:40:26.472070  102947 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:40:26.473856  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:26.473888  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:26.473917  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:26.473931  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:40:26.474012  102947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:40:26.474058  102947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:40:26.474062  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.474114  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.500808  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.503237  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.708883  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:40:26.738637  102947 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:40:26.738718  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:40:26.752845  102947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:40:26.752872  102947 start.go:495] detecting cgroup driver to use...
	I0919 22:40:26.752907  102947 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:40:26.752955  102947 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:40:26.771737  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:40:26.788372  102947 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:40:26.788434  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:40:26.810086  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:40:26.828338  102947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:40:26.983767  102947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:40:27.150072  102947 docker.go:234] disabling docker service ...
	I0919 22:40:27.150147  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:40:27.173008  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:40:27.193344  102947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:40:27.317738  102947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:40:27.460983  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:40:27.485592  102947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:40:27.507890  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:40:27.520044  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:40:27.534512  102947 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:40:27.534574  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:40:27.548984  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:27.562483  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:40:27.577519  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:27.592117  102947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:40:27.604075  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:40:27.616958  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:40:27.631964  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:40:27.646292  102947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:40:27.658210  102947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:40:27.672336  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:27.803893  102947 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:40:28.062245  102947 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:40:28.062313  102947 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:40:28.066699  102947 start.go:563] Will wait 60s for crictl version
	I0919 22:40:28.066771  102947 ssh_runner.go:195] Run: which crictl
	I0919 22:40:28.071489  102947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:40:28.109371  102947 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:40:28.109444  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:28.135369  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:28.166192  102947 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:40:28.167830  102947 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:40:28.169229  102947 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:40:28.170416  102947 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:40:28.189509  102947 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:40:28.193804  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:28.206515  102947 mustload.go:65] Loading cluster: ha-326307
	I0919 22:40:28.206800  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:28.207069  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:28.226787  102947 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:40:28.227094  102947 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.4
	I0919 22:40:28.227201  102947 certs.go:194] generating shared ca certs ...
	I0919 22:40:28.227247  102947 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:28.227424  102947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:40:28.227487  102947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:40:28.227504  102947 certs.go:256] generating profile certs ...
	I0919 22:40:28.227586  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:40:28.227634  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604
	I0919 22:40:28.227713  102947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:40:28.227730  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:40:28.227749  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:40:28.227764  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:40:28.227783  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:40:28.227800  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:40:28.227819  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:40:28.227839  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:40:28.227862  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:40:28.227929  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:40:28.227971  102947 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:40:28.227984  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:40:28.228019  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:40:28.228051  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:40:28.228082  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:40:28.228166  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:28.228213  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:40:28.228239  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:40:28.228259  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.228383  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:28.247785  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:28.336571  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:40:28.341071  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:40:28.354226  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:40:28.358563  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:40:28.373723  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:40:28.378406  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:40:28.394406  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:40:28.399415  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:40:28.416091  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:40:28.420161  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:40:28.435710  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:40:28.439831  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:40:28.454973  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:40:28.488291  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:40:28.520386  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:40:28.548878  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:40:28.577674  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:40:28.606894  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:40:28.635467  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:40:28.664035  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:40:28.692528  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:40:28.721969  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:40:28.750129  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:40:28.777226  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:40:28.798416  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:40:28.818429  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:40:28.844040  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:40:28.875418  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:40:28.898298  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:40:28.918961  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:40:28.940259  102947 ssh_runner.go:195] Run: openssl version
	I0919 22:40:28.946752  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:40:28.959425  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.964456  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.964528  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.973714  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:40:28.984876  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:40:28.996258  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:40:29.000541  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:40:29.000605  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:40:29.008599  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:40:29.018788  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:40:29.030314  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:40:29.034634  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:40:29.034700  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:40:29.042685  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:40:29.052467  102947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:40:29.056255  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:40:29.063105  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:40:29.071819  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:40:29.079410  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:40:29.086705  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:40:29.094001  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
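	Each `openssl x509 -noout -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now (exit 0 if yes, 1 if it will have expired). A rough standard-library Go equivalent, shown only for illustration, with an example path copied from the log:

	// checkend.go - rough equivalent of `openssl x509 -noout -checkend 86400`;
	// an illustrative sketch, not the code minikube runs.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		const path = "/var/lib/minikube/certs/etcd/server.crt" // example path from the log
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(2)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		// -checkend 86400: fail if the cert is no longer valid 86400 seconds from now.
		if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
			os.Exit(1)
		}
		fmt.Println("Certificate will not expire")
	}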
	I0919 22:40:29.101257  102947 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0919 22:40:29.101378  102947 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:40:29.101410  102947 kube-vip.go:115] generating kube-vip config ...
	I0919 22:40:29.101456  102947 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:40:29.115062  102947 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:29.115120  102947 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:40:29.115184  102947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:40:29.124866  102947 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:40:29.124920  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:40:29.135111  102947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:40:29.156313  102947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:40:29.177045  102947 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:40:29.198544  102947 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:40:29.203037  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:29.216695  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:29.333585  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:29.349312  102947 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:40:29.349626  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:29.352738  102947 out.go:179] * Verifying Kubernetes components...
	I0919 22:40:29.354445  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:29.474185  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:29.488500  102947 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:40:29.488573  102947 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:40:29.488783  102947 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m03" to be "Ready" ...
	I0919 22:40:29.492092  102947 node_ready.go:49] node "ha-326307-m03" is "Ready"
	I0919 22:40:29.492121  102947 node_ready.go:38] duration metric: took 3.321791ms for node "ha-326307-m03" to be "Ready" ...
	I0919 22:40:29.492134  102947 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:40:29.492205  102947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:40:29.506850  102947 api_server.go:72] duration metric: took 157.484065ms to wait for apiserver process to appear ...
	I0919 22:40:29.506886  102947 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:40:29.506910  102947 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:40:29.511130  102947 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:40:29.512015  102947 api_server.go:141] control plane version: v1.34.0
	I0919 22:40:29.512036  102947 api_server.go:131] duration metric: took 5.141712ms to wait for apiserver health ...
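	The healthz probe above is a plain HTTPS GET against the apiserver using the profile's client certificate; a 200 response with body "ok" counts as healthy. A minimal Go sketch of the same check follows; the certificate and CA paths are copied from the client config dump earlier in the log and are assumptions here, not minikube's actual implementation.

	// healthz_probe.go - illustrative sketch of the apiserver healthz check shown above.
	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		const (
			caFile   = "/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt"
			certFile = "/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt"
			keyFile  = "/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key"
		)

		caPEM, err := os.ReadFile(caFile)
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)

		clientCert, err := tls.LoadX509KeyPair(certFile, keyFile)
		if err != nil {
			panic(err)
		}

		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{
					RootCAs:      pool,
					Certificates: []tls.Certificate{clientCert},
				},
			},
		}

		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect "200 ok" when healthy
	}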
	I0919 22:40:29.512043  102947 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:40:29.518744  102947 system_pods.go:59] 24 kube-system pods found
	I0919 22:40:29.518774  102947 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.518782  102947 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.518787  102947 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:40:29.518791  102947 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:40:29.518796  102947 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:29.518800  102947 system_pods.go:61] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:29.518804  102947 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:29.518807  102947 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:29.518810  102947 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:40:29.518813  102947 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:40:29.518819  102947 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:29.518822  102947 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:40:29.518828  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.518858  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.518862  102947 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:29.518868  102947 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:29.518873  102947 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:29.518879  102947 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:40:29.518884  102947 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.518888  102947 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.518894  102947 system_pods.go:61] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:29.518897  102947 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:29.518900  102947 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:29.518905  102947 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:29.518910  102947 system_pods.go:74] duration metric: took 6.861836ms to wait for pod list to return data ...
	I0919 22:40:29.518919  102947 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:40:29.521697  102947 default_sa.go:45] found service account: "default"
	I0919 22:40:29.521719  102947 default_sa.go:55] duration metric: took 2.795273ms for default service account to be created ...
	I0919 22:40:29.521728  102947 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:40:29.527102  102947 system_pods.go:86] 24 kube-system pods found
	I0919 22:40:29.527136  102947 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.527144  102947 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.527166  102947 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:40:29.527174  102947 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:40:29.527181  102947 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:29.527186  102947 system_pods.go:89] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:29.527195  102947 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:29.527200  102947 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:29.527209  102947 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:40:29.527214  102947 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:40:29.527224  102947 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:29.527233  102947 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:40:29.527244  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.527251  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.527259  102947 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:29.527265  102947 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:29.527274  102947 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:29.527282  102947 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:40:29.527293  102947 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.527304  102947 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.527311  102947 system_pods.go:89] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:29.527318  102947 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:29.527326  102947 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:29.527331  102947 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:29.527342  102947 system_pods.go:126] duration metric: took 5.60777ms to wait for k8s-apps to be running ...
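	The k8s-apps check above lists every pod in kube-system and inspects its Ready condition. A minimal client-go sketch that reproduces that listing is below; the kubeconfig path is an assumption and this is not the code minikube itself runs.

	// list_kube_system.go - minimal client-go sketch of the kube-system readiness listing above.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := "/home/jenkins/minikube-integration/21594-14678/kubeconfig" // assumed path
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%-55s phase=%-9s ready=%v\n", p.Name, p.Status.Phase, ready)
		}
	}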
	I0919 22:40:29.527353  102947 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:40:29.527418  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:40:29.540084  102947 system_svc.go:56] duration metric: took 12.720236ms WaitForService to wait for kubelet
	I0919 22:40:29.540114  102947 kubeadm.go:578] duration metric: took 190.753677ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:40:29.540138  102947 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:40:29.543938  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:29.543961  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:29.543977  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:29.543981  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:29.543985  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:29.543988  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:29.543992  102947 node_conditions.go:105] duration metric: took 3.848698ms to run NodePressure ...
	I0919 22:40:29.544002  102947 start.go:241] waiting for startup goroutines ...
	I0919 22:40:29.544021  102947 start.go:255] writing updated cluster config ...
	I0919 22:40:29.546124  102947 out.go:203] 
	I0919 22:40:29.547729  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:29.547827  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:29.549464  102947 out.go:179] * Starting "ha-326307-m04" worker node in "ha-326307" cluster
	I0919 22:40:29.551423  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:29.552959  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:29.554347  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:29.554374  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:29.554466  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:29.554528  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:29.554544  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:29.554661  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:29.576604  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:29.576623  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:29.576636  102947 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:40:29.576658  102947 start.go:360] acquireMachinesLock for ha-326307-m04: {Name:mk65e4f546dae46b7c7a6cfe6f590e09a0a01676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:29.576722  102947 start.go:364] duration metric: took 36.867µs to acquireMachinesLock for "ha-326307-m04"
	I0919 22:40:29.576740  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:29.576747  102947 fix.go:54] fixHost starting: m04
	I0919 22:40:29.576991  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:40:29.599524  102947 fix.go:112] recreateIfNeeded on ha-326307-m04: state=Stopped err=<nil>
	W0919 22:40:29.599554  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:29.601341  102947 out.go:252] * Restarting existing docker container for "ha-326307-m04" ...
	I0919 22:40:29.601436  102947 cli_runner.go:164] Run: docker start ha-326307-m04
	I0919 22:40:29.856928  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:40:29.877141  102947 kic.go:430] container "ha-326307-m04" state is running.
	I0919 22:40:29.877564  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m04
	I0919 22:40:29.898099  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:29.898353  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:29.898408  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	I0919 22:40:29.919242  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:29.919493  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32834 <nil> <nil>}
	I0919 22:40:29.919509  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:29.920238  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53392->127.0.0.1:32834: read: connection reset by peer
	I0919 22:40:32.921592  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:35.923978  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:38.925460  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:41.925968  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:44.927435  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:47.928879  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:50.930439  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:53.931750  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:56.932223  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:59.933541  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:02.934449  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:05.936468  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:08.938720  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:11.939132  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:14.940311  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:17.941338  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:20.943720  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:23.944321  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:26.945127  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:29.946482  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:32.947311  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:35.949504  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:38.950829  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:41.951282  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:44.951718  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:47.952886  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:50.954501  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:53.955026  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:56.955566  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:59.956458  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:02.958263  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:05.960452  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:08.960827  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:11.961991  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:14.963364  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:17.964467  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:20.966794  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:23.967257  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:26.968419  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:29.969450  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:32.970449  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:35.972383  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:38.974402  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:41.974947  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:44.975961  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:47.977119  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:50.979045  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:53.979535  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:56.980106  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:59.981632  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:02.983145  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:05.985114  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:08.987742  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:11.988246  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:14.988636  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:17.990247  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:20.990690  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:23.991025  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:26.992363  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:29.994267  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
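	The long run of "connection refused" lines above is the SSH dialer probing the container's mapped SSH port (127.0.0.1:32834) roughly every three seconds until it answers or provisioning gives up (provisionDockerMachine reports a duration of 3m0.116525554s a few lines below). A minimal retry loop with the same shape, sketched in Go for illustration; the interval comes from the log timestamps, and the 3-minute deadline is an assumption inferred from that timing rather than libmachine's actual code.

	// ssh_wait.go - illustrative sketch of the dial-and-retry pattern behind the
	// "connection refused" lines above; not libmachine's implementation.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForSSH(addr string, interval, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for {
			conn, err := net.DialTimeout("tcp", addr, interval)
			if err == nil {
				conn.Close()
				return nil // port is accepting connections
			}
			fmt.Printf("dial %s: %v (retrying)\n", addr, err)
			if time.Now().After(stop) {
				return fmt.Errorf("gave up waiting for %s after %s", addr, deadline)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		if err := waitForSSH("127.0.0.1:32834", 3*time.Second, 3*time.Minute); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("ssh port is reachable")
		}
	}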
	I0919 22:43:29.994298  102947 ubuntu.go:182] provisioning hostname "ha-326307-m04"
	I0919 22:43:29.994384  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.014799  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.014894  102947 machine.go:96] duration metric: took 3m0.116525554s to provisionDockerMachine
	I0919 22:43:30.014980  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:43:30.015024  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.033859  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.033976  102947 retry.go:31] will retry after 180.600333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:30.215391  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.234687  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.234800  102947 retry.go:31] will retry after 396.872897ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:30.632462  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.651421  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.651553  102947 retry.go:31] will retry after 330.021621ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:30.982141  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:31.001874  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:31.001981  102947 retry.go:31] will retry after 902.78257ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:31.905550  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:31.924562  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:43:31.924688  102947 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:31.924702  102947 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:31.924747  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:43:31.924776  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:31.944532  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:31.944644  102947 retry.go:31] will retry after 370.439297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:32.316311  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:32.335705  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:32.335801  102947 retry.go:31] will retry after 471.735503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:32.808402  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:32.828725  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:32.828845  102947 retry.go:31] will retry after 653.918581ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:33.483771  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:33.505126  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:43:33.505274  102947 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:33.505310  102947 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:33.505321  102947 fix.go:56] duration metric: took 3m3.928573811s for fixHost
	I0919 22:43:33.505333  102947 start.go:83] releasing machines lock for "ha-326307-m04", held for 3m3.928601896s
	W0919 22:43:33.505353  102947 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:33.505432  102947 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:33.505457  102947 start.go:729] Will try again in 5 seconds ...
	I0919 22:43:38.507265  102947 start.go:360] acquireMachinesLock for ha-326307-m04: {Name:mk65e4f546dae46b7c7a6cfe6f590e09a0a01676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:43:38.507371  102947 start.go:364] duration metric: took 72.258µs to acquireMachinesLock for "ha-326307-m04"
	I0919 22:43:38.507394  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:43:38.507402  102947 fix.go:54] fixHost starting: m04
	I0919 22:43:38.507660  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:43:38.526017  102947 fix.go:112] recreateIfNeeded on ha-326307-m04: state=Stopped err=<nil>
	W0919 22:43:38.526047  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:43:38.528104  102947 out.go:252] * Restarting existing docker container for "ha-326307-m04" ...
	I0919 22:43:38.528195  102947 cli_runner.go:164] Run: docker start ha-326307-m04
	I0919 22:43:38.792918  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:43:38.812750  102947 kic.go:430] container "ha-326307-m04" state is running.
	I0919 22:43:38.813122  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m04
	I0919 22:43:38.835015  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:43:38.835331  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:43:38.835404  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	I0919 22:43:38.855863  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:38.856092  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32839 <nil> <nil>}
	I0919 22:43:38.856104  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:43:38.856765  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33486->127.0.0.1:32839: read: connection reset by peer
	I0919 22:43:41.857087  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:44.857460  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:47.858230  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:50.860407  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:53.860840  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:56.862141  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:59.863585  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:02.864745  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:05.867376  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:08.869862  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:11.870894  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:14.871487  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:17.872736  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:20.874506  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:23.875596  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:26.875979  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:29.877435  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:32.878977  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:35.881595  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:38.883657  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:41.884099  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:44.885281  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:47.887113  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:50.889449  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:53.889898  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:56.891131  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:59.893426  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:02.895108  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:05.896902  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:08.899087  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:11.900184  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:14.901096  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:17.902201  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:20.904503  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:23.904962  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:26.906198  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:29.908575  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:32.910119  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:35.912526  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:38.914521  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:41.915090  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:44.916505  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:47.917924  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:50.919469  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:53.919814  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:56.920315  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:59.922657  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:02.924190  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:05.926504  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:08.928432  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:11.929228  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:14.930499  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:17.931536  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:20.934030  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:23.934965  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:26.936258  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:29.938459  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:32.939438  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:35.941457  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:38.943814  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:46:38.943857  102947 ubuntu.go:182] provisioning hostname "ha-326307-m04"
	I0919 22:46:38.943941  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:38.964275  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:38.964337  102947 machine.go:96] duration metric: took 3m0.128991371s to provisionDockerMachine
	I0919 22:46:38.964416  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:46:38.964451  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:38.983816  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:38.983960  102947 retry.go:31] will retry after 364.420464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:39.349386  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:39.369081  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:39.369225  102947 retry.go:31] will retry after 206.788026ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:39.576720  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:39.596502  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:39.596609  102947 retry.go:31] will retry after 511.892744ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:40.109367  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:40.129534  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:40.129648  102947 retry.go:31] will retry after 811.778179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:40.941718  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:40.962501  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:46:40.962610  102947 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:46:40.962628  102947 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:40.962672  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:46:40.962701  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:40.983319  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:40.983479  102947 retry.go:31] will retry after 310.783714ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:41.295059  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:41.314519  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:41.314654  102947 retry.go:31] will retry after 532.410728ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:41.847306  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:41.866776  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:41.866902  102947 retry.go:31] will retry after 498.480272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:42.366422  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:42.388450  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:46:42.388595  102947 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:46:42.388613  102947 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:42.388623  102947 fix.go:56] duration metric: took 3m3.881222347s for fixHost
	I0919 22:46:42.388631  102947 start.go:83] releasing machines lock for "ha-326307-m04", held for 3m3.881250201s
	W0919 22:46:42.388708  102947 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-326307" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	* Failed to start docker container. Running "minikube delete -p ha-326307" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:42.391386  102947 out.go:203] 
	W0919 22:46:42.393146  102947 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:46:42.393190  102947 out.go:285] * 
	* 
	W0919 22:46:42.395039  102947 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 22:46:42.396646  102947 out.go:203] 

** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-326307 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 node list --alsologtostderr -v 5
ha_test.go:481: reported node list is not the same after restart. Before restart: ha-326307	192.168.49.2
ha-326307-m02	192.168.49.3
ha-326307-m03	192.168.49.4
ha-326307-m04	

After restart: ha-326307	192.168.49.2
ha-326307-m02	192.168.49.3
ha-326307-m03	192.168.49.4
ha-326307-m04	192.168.49.5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-326307
helpers_test.go:243: (dbg) docker inspect ha-326307:

-- stdout --
	[
	    {
	        "Id": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	        "Created": "2025-09-19T22:23:18.619000062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 103141,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:40:06.624789529Z",
	            "FinishedAt": "2025-09-19T22:40:05.96037119Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hosts",
	        "LogPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3-json.log",
	        "Name": "/ha-326307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-326307:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-326307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	                "LowerDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-326307",
	                "Source": "/var/lib/docker/volumes/ha-326307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-326307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-326307",
	                "name.minikube.sigs.k8s.io": "ha-326307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "06e56c61a506ab53aec79a320b27a6a2cf564500e22874ecad29c9521c3f21e9",
	            "SandboxKey": "/var/run/docker/netns/06e56c61a506",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32819"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32820"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32823"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32821"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32822"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-326307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:8a:0a:e2:38:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "465af21e2d8d112a34f14f7c3ab89eac4e6e57f582ff8e32b514381f55dd085e",
	                    "EndpointID": "bf734c63b8ebe83bbbed163afe56c19f4973081d194aed0cefd76108129a5748",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-326307",
	                        "5e0f1fe86b08"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-326307 -n ha-326307
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 logs -n 25: (1.781237083s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307-m03_ha-326307-m02.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m02 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m02.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp testdata/cp-test.txt ha-326307-m04:/home/docker/cp-test.txt                                                            │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307-m04.txt │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307:/home/docker/cp-test_ha-326307-m04_ha-326307.txt                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307.txt                                                │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m02 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m03:/home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ node    │ ha-326307 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ node    │ ha-326307 node start m02 --alsologtostderr -v 5                                                                                     │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ node    │ ha-326307 node list --alsologtostderr -v 5                                                                                          │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:39 UTC │                     │
	│ stop    │ ha-326307 stop --alsologtostderr -v 5                                                                                               │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:39 UTC │ 19 Sep 25 22:40 UTC │
	│ start   │ ha-326307 start --wait true --alsologtostderr -v 5                                                                                  │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:40 UTC │                     │
	│ node    │ ha-326307 node list --alsologtostderr -v 5                                                                                          │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:40:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:40:06.378966  102947 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:40:06.379330  102947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:40:06.379341  102947 out.go:374] Setting ErrFile to fd 2...
	I0919 22:40:06.379345  102947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:40:06.379571  102947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:40:06.380057  102947 out.go:368] Setting JSON to false
	I0919 22:40:06.381142  102947 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4950,"bootTime":1758316656,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:40:06.381289  102947 start.go:140] virtualization: kvm guest
	I0919 22:40:06.383708  102947 out.go:179] * [ha-326307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:40:06.385240  102947 notify.go:220] Checking for updates...
	I0919 22:40:06.385299  102947 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:40:06.386659  102947 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:40:06.388002  102947 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:40:06.389281  102947 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:40:06.390761  102947 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:40:06.392296  102947 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:40:06.394377  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:06.394567  102947 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:40:06.419564  102947 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:40:06.419671  102947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:40:06.482479  102947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:40:06.471430741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:40:06.482585  102947 docker.go:318] overlay module found
	I0919 22:40:06.484475  102947 out.go:179] * Using the docker driver based on existing profile
	I0919 22:40:06.485822  102947 start.go:304] selected driver: docker
	I0919 22:40:06.485843  102947 start.go:918] validating driver "docker" against &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:40:06.485989  102947 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:40:06.486131  102947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:40:06.542030  102947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:40:06.531788772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:40:06.542709  102947 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:40:06.542747  102947 cni.go:84] Creating CNI manager for ""
	I0919 22:40:06.542808  102947 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:40:06.542862  102947 start.go:348] cluster config:
	{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:40:06.544976  102947 out.go:179] * Starting "ha-326307" primary control-plane node in "ha-326307" cluster
	I0919 22:40:06.546636  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:06.548781  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:06.550349  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:06.550411  102947 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 22:40:06.550421  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:06.550484  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:06.550539  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:06.550548  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:06.550672  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:06.573025  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:06.573049  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:06.573066  102947 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:40:06.573093  102947 start.go:360] acquireMachinesLock for ha-326307: {Name:mk42b79b90944aab63c8b37c2f94e04ca1ebec1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:06.573185  102947 start.go:364] duration metric: took 59.872µs to acquireMachinesLock for "ha-326307"
	I0919 22:40:06.573210  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:06.573217  102947 fix.go:54] fixHost starting: 
	I0919 22:40:06.573525  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:06.592648  102947 fix.go:112] recreateIfNeeded on ha-326307: state=Stopped err=<nil>
	W0919 22:40:06.592678  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:06.594861  102947 out.go:252] * Restarting existing docker container for "ha-326307" ...
	I0919 22:40:06.594935  102947 cli_runner.go:164] Run: docker start ha-326307
	I0919 22:40:06.849585  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:06.870075  102947 kic.go:430] container "ha-326307" state is running.
	I0919 22:40:06.870543  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:40:06.891652  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:06.891897  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:06.891960  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:06.913541  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:06.913830  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32819 <nil> <nil>}
	I0919 22:40:06.913845  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:06.914579  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60650->127.0.0.1:32819: read: connection reset by peer
	I0919 22:40:10.057342  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:40:10.057370  102947 ubuntu.go:182] provisioning hostname "ha-326307"
	I0919 22:40:10.057448  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.076664  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:10.076914  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32819 <nil> <nil>}
	I0919 22:40:10.076932  102947 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307 && echo "ha-326307" | sudo tee /etc/hostname
	I0919 22:40:10.228297  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:40:10.228362  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.247319  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:10.247573  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32819 <nil> <nil>}
	I0919 22:40:10.247594  102947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:40:10.386261  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:40:10.386297  102947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:40:10.386346  102947 ubuntu.go:190] setting up certificates
	I0919 22:40:10.386360  102947 provision.go:84] configureAuth start
	I0919 22:40:10.386416  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:40:10.407761  102947 provision.go:143] copyHostCerts
	I0919 22:40:10.407810  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:10.407855  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:40:10.407875  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:10.407957  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:40:10.408069  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:10.408095  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:40:10.408103  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:10.408148  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:40:10.408242  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:10.408268  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:40:10.408278  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:10.408327  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:40:10.408399  102947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307 san=[127.0.0.1 192.168.49.2 ha-326307 localhost minikube]
	I0919 22:40:10.713645  102947 provision.go:177] copyRemoteCerts
	I0919 22:40:10.713742  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:40:10.713785  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.733589  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:10.833003  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:40:10.833079  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:40:10.860656  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:40:10.860740  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:40:10.888926  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:40:10.889032  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:40:10.916393  102947 provision.go:87] duration metric: took 530.019982ms to configureAuth
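configureAuth regenerates the Docker machine server certificate with the SAN list logged by provision.go:117 above (127.0.0.1, 192.168.49.2, ha-326307, localhost, minikube). A minimal crypto/x509 sketch of assembling such a certificate; it self-signs purely for brevity and assumes nothing about minikube's internal helpers, which sign with ca.pem/ca-key.pem instead:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs copied from the provision.go:117 log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-326307"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-326307", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	// Self-signed here for brevity; the real flow signs with the minikube CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued %d-byte DER certificate with %d SANs\n", len(der), len(tmpl.DNSNames)+len(tmpl.IPAddresses))
}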
	I0919 22:40:10.916415  102947 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:40:10.916623  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:10.916638  102947 machine.go:96] duration metric: took 4.024727048s to provisionDockerMachine
	I0919 22:40:10.916646  102947 start.go:293] postStartSetup for "ha-326307" (driver="docker")
	I0919 22:40:10.916656  102947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:40:10.916705  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:40:10.916774  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.935896  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.036597  102947 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:40:11.040388  102947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:40:11.040431  102947 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:40:11.040440  102947 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:40:11.040446  102947 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:40:11.040457  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:40:11.040518  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:40:11.040597  102947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:40:11.040608  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:40:11.040710  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:40:11.050512  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:11.077986  102947 start.go:296] duration metric: took 161.32783ms for postStartSetup
	I0919 22:40:11.078088  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:40:11.078139  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:11.099514  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.193605  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:40:11.198421  102947 fix.go:56] duration metric: took 4.625199971s for fixHost
	I0919 22:40:11.198447  102947 start.go:83] releasing machines lock for "ha-326307", held for 4.625246732s
	I0919 22:40:11.198524  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:40:11.217572  102947 ssh_runner.go:195] Run: cat /version.json
	I0919 22:40:11.217596  102947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:40:11.217615  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:11.217666  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:11.238048  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.238195  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.415017  102947 ssh_runner.go:195] Run: systemctl --version
	I0919 22:40:11.420537  102947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:40:11.425907  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:40:11.447016  102947 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:40:11.447107  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:40:11.457668  102947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:40:11.457703  102947 start.go:495] detecting cgroup driver to use...
	I0919 22:40:11.457740  102947 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:40:11.457803  102947 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:40:11.473712  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:40:11.486915  102947 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:40:11.486970  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:40:11.501818  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:40:11.514985  102947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:40:11.582004  102947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:40:11.651320  102947 docker.go:234] disabling docker service ...
	I0919 22:40:11.651379  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:40:11.665822  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:40:11.678416  102947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:40:11.746878  102947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:40:11.815384  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:40:11.828348  102947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:40:11.847640  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:40:11.859649  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:40:11.871696  102947 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:40:11.871768  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:40:11.883197  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:11.894832  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:40:11.906582  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:11.918458  102947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:40:11.929108  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:40:11.940521  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:40:11.952577  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:40:11.963963  102947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:40:11.974367  102947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:40:11.985259  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:12.050391  102947 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:40:12.169871  102947 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:40:12.169947  102947 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
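After restarting containerd, the start logic polls for the CRI socket rather than assuming the restart succeeded. A simplified Go sketch of the 60-second wait announced above (a straightforward polling loop is assumed; minikube's actual retry helper differs):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes, mirroring
// "Will wait 60s for socket path /run/containerd/containerd.sock" above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("containerd socket is ready")
}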
	I0919 22:40:12.174079  102947 start.go:563] Will wait 60s for crictl version
	I0919 22:40:12.174139  102947 ssh_runner.go:195] Run: which crictl
	I0919 22:40:12.177946  102947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:40:12.213111  102947 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:40:12.213183  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:12.237742  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:12.267221  102947 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:40:12.268667  102947 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:40:12.287123  102947 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:40:12.291375  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:12.304417  102947 kubeadm.go:875] updating cluster {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:40:12.304576  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:12.304623  102947 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:40:12.341103  102947 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:40:12.341184  102947 containerd.go:534] Images already preloaded, skipping extraction
	I0919 22:40:12.341271  102947 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:40:12.378884  102947 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:40:12.378907  102947 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:40:12.378916  102947 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0919 22:40:12.379030  102947 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:40:12.379093  102947 ssh_runner.go:195] Run: sudo crictl info
	I0919 22:40:12.415076  102947 cni.go:84] Creating CNI manager for ""
	I0919 22:40:12.415100  102947 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:40:12.415111  102947 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:40:12.415129  102947 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326307 NodeName:ha-326307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:40:12.415290  102947 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-326307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
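Two settings in this run must agree: containerd was patched earlier with SystemdCgroup = true, and the generated KubeletConfiguration above pins cgroupDriver: systemd. A small Go sketch that reads the driver back out of such a config to assert the match; it uses the third-party gopkg.in/yaml.v3 package as an assumption, not anything shown in the report:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// Only the field we care about; the rest of the KubeletConfiguration is ignored.
type kubeletConfig struct {
	CgroupDriver string `yaml:"cgroupDriver"`
}

func main() {
	doc := []byte("apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ncgroupDriver: systemd\n")
	var kc kubeletConfig
	if err := yaml.Unmarshal(doc, &kc); err != nil {
		panic(err)
	}
	if kc.CgroupDriver != "systemd" {
		panic("kubelet and containerd cgroup drivers would disagree")
	}
	fmt.Println("kubelet cgroup driver:", kc.CgroupDriver)
}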
	
	I0919 22:40:12.415312  102947 kube-vip.go:115] generating kube-vip config ...
	I0919 22:40:12.415360  102947 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:40:12.428658  102947 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:12.428770  102947 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
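The static pod above advertises 192.168.49.254 as the control-plane VIP on port 8443, even though ipvs-based load balancing was skipped because the ip_vs kernel modules are unavailable. A tiny Go sketch of the kind of reachability probe a test could run against that address; this is a hypothetical check, not part of minikube:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP address and port taken from the kube-vip config above.
	conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP answered on", conn.RemoteAddr())
}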
	I0919 22:40:12.428823  102947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:40:12.438647  102947 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:40:12.438722  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:40:12.448707  102947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 22:40:12.468517  102947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:40:12.488929  102947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0919 22:40:12.510232  102947 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:40:12.530559  102947 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:40:12.534624  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:12.548237  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:12.611595  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:12.634054  102947 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.2
	I0919 22:40:12.634076  102947 certs.go:194] generating shared ca certs ...
	I0919 22:40:12.634091  102947 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:12.634256  102947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:40:12.634323  102947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:40:12.634335  102947 certs.go:256] generating profile certs ...
	I0919 22:40:12.634435  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:40:12.634462  102947 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704
	I0919 22:40:12.634473  102947 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:40:12.848520  102947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704 ...
	I0919 22:40:12.848550  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704: {Name:mkec91c90022534b703be5f6d2ae62638fdba9da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:12.848737  102947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704 ...
	I0919 22:40:12.848755  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704: {Name:mka1bfb464462bf578809e209441ee38ad333adf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:12.848871  102947 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:40:12.849067  102947 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
	I0919 22:40:12.849277  102947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:40:12.849295  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:40:12.849315  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:40:12.849337  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:40:12.849355  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:40:12.849373  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:40:12.849392  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:40:12.849410  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:40:12.849430  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:40:12.849610  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:40:12.849684  102947 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:40:12.849700  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:40:12.849733  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:40:12.849775  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:40:12.849812  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:40:12.849872  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:12.849915  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:40:12.849936  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:40:12.849955  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:12.850570  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:40:12.881412  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:40:12.909365  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:40:12.936570  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:40:12.963699  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:40:12.991460  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:40:13.019268  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:40:13.046670  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:40:13.074069  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:40:13.101424  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:40:13.128690  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:40:13.156653  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:40:13.179067  102947 ssh_runner.go:195] Run: openssl version
	I0919 22:40:13.187620  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:40:13.203083  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:40:13.209838  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:40:13.209911  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:40:13.220919  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:40:13.238903  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:40:13.253729  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:40:13.261626  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:40:13.261780  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:40:13.272880  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:40:13.287661  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:40:13.303848  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:13.308762  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:13.308833  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:13.319788  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:40:13.336323  102947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:40:13.343266  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:40:13.355799  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:40:13.367939  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:40:13.378087  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:40:13.388839  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:40:13.399528  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
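Each "openssl x509 -checkend 86400" call above asks whether a control-plane certificate remains valid for at least another 24 hours. The same check in plain Go, as a sketch with a hypothetical local path (the report only runs the check over SSH on the node):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical local path; the log inspects certs on the remote node.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent to -checkend 86400: flag the cert if it expires within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid until", cert.NotAfter)
	}
}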
	I0919 22:40:13.412341  102947 kubeadm.go:392] StartCluster: {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:40:13.412499  102947 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 22:40:13.412584  102947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:40:13.476121  102947 cri.go:89] found id: "83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad"
	I0919 22:40:13.476178  102947 cri.go:89] found id: "63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284"
	I0919 22:40:13.476184  102947 cri.go:89] found id: "7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c"
	I0919 22:40:13.476189  102947 cri.go:89] found id: "c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6"
	I0919 22:40:13.476197  102947 cri.go:89] found id: "e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5"
	I0919 22:40:13.476204  102947 cri.go:89] found id: "d1c181558b58c3ebc7dae5df97b80119a8c6d18dc19b0532b61999c4db0c6668"
	I0919 22:40:13.476209  102947 cri.go:89] found id: "ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93"
	I0919 22:40:13.476214  102947 cri.go:89] found id: "1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6"
	I0919 22:40:13.476221  102947 cri.go:89] found id: "f52d2d9f5881b5d50f95a3aeef2c876d51dd6b2be6c464a7464b26e8175b0fc6"
	I0919 22:40:13.476232  102947 cri.go:89] found id: "365cc00c2e009eeed7e71d1202a4d406c12f0d9faee38762ba691eb2d7c71f89"
	I0919 22:40:13.476255  102947 cri.go:89] found id: "bd9e41958ffbbb27dab3d180a56fb27df36a1a1896db3a6f322a8aabcda57677"
	I0919 22:40:13.476262  102947 cri.go:89] found id: "456a0c3cbf5ce028b9cbac658728c1fee13ad8e2659bfa0c625cd685d711c708"
	I0919 22:40:13.476267  102947 cri.go:89] found id: "05ab0247624a7f8ffa6bc948e3abc3adc49911c297291eb0a6dd42e3df39f4cd"
	I0919 22:40:13.476272  102947 cri.go:89] found id: "e5c59a6abe97751de42afd27010936d1c3a401fad6cd730e75a1692a895b4fbc"
	I0919 22:40:13.476278  102947 cri.go:89] found id: "e80d65e3c7c18da87f7fa003e39382f5a4285ba4782fc295197421c6b882a161"
	I0919 22:40:13.476285  102947 cri.go:89] found id: ""
	I0919 22:40:13.476358  102947 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0919 22:40:13.511540  102947 cri.go:116] JSON = [{"ociVersion":"1.2.0","id":"35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","pid":903,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92/rootfs","created":"2025-09-19T22:40:13.265497632Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-ha-326307_57c850ed4c5abebc96f109c9dc04f98c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-ha-3263
07","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"57c850ed4c5abebc96f109c9dc04f98c"},"owner":"root"},{"ociVersion":"1.2.0","id":"4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","pid":851,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f/rootfs","created":"2025-09-19T22:40:13.237289545Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-ha-326307_f6c96a149704fe94a8f3f9671ba1a8ff","io.kubernetes.cri.sandbox-memor
y":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f6c96a149704fe94a8f3f9671ba1a8ff"},"owner":"root"},{"ociVersion":"1.2.0","id":"63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284","pid":1109,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284/rootfs","created":"2025-09-19T22:40:13.452193435Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri.sandbox-id":"b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","io.kubernetes.cri.sandbox-name":"kube-scheduler-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-s
ystem","io.kubernetes.cri.sandbox-uid":"02be84f36b44ed11e0db130395870414"},"owner":"root"},{"ociVersion":"1.2.0","id":"7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c","pid":1081,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c/rootfs","created":"2025-09-19T22:40:13.445726517Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri.sandbox-id":"35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","io.kubernetes.cri.sandbox-name":"kube-controller-manager-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"57c850ed4c5abebc96f109c9dc04f98c"},"owner":"r
oot"},{"ociVersion":"1.2.0","id":"8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","pid":926,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d/rootfs","created":"2025-09-19T22:40:13.291697374Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-vip-ha-326307_11fc7e0ddcb5f54efe3aa73e9d205abc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-vip-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-ui
d":"11fc7e0ddcb5f54efe3aa73e9d205abc"},"owner":"root"},{"ociVersion":"1.2.0","id":"83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad","pid":1117,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad/rootfs","created":"2025-09-19T22:40:13.459929825Z","annotations":{"io.kubernetes.cri.container-name":"kube-vip","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri.sandbox-id":"8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","io.kubernetes.cri.sandbox-name":"kube-vip-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"11fc7e0ddcb5f54efe3aa73e9d205abc"},"owner":"root"},{"ociVersion":"1.2.0","id":"a85600718119d4e698fdb29d033016634be8e463e4c29a4
b1f9b6778b83c3910","pid":850,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910/rootfs","created":"2025-09-19T22:40:13.246511214Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-ha-326307_044bbdcbe96821df073716c7f05fb17d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"044bbdcbe96821df073716c7f05fb17d"},"owner":"root"},{"ociVersion":"1.2.0","id":"b84e
223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","pid":911,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248/rootfs","created":"2025-09-19T22:40:13.280883406Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-ha-326307_02be84f36b44ed11e0db130395870414","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"02be84f36b44ed11e0db
130395870414"},"owner":"root"},{"ociVersion":"1.2.0","id":"c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6","pid":1090,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6/rootfs","created":"2025-09-19T22:40:13.443035858Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910","io.kubernetes.cri.sandbox-name":"etcd-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"044bbdcbe96821df073716c7f05fb17d"},"owner":"root"},{"ociVersion":"1.2.0","id":"e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5","pid":1007,"statu
s":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5/rootfs","created":"2025-09-19T22:40:13.41525993Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri.sandbox-id":"4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","io.kubernetes.cri.sandbox-name":"kube-apiserver-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f6c96a149704fe94a8f3f9671ba1a8ff"},"owner":"root"}]
	I0919 22:40:13.511763  102947 cri.go:126] list returned 10 containers
	I0919 22:40:13.511789  102947 cri.go:129] container: {ID:35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92 Status:running}
	I0919 22:40:13.511829  102947 cri.go:131] skipping 35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92 - not in ps
	I0919 22:40:13.511840  102947 cri.go:129] container: {ID:4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f Status:running}
	I0919 22:40:13.511848  102947 cri.go:131] skipping 4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f - not in ps
	I0919 22:40:13.511854  102947 cri.go:129] container: {ID:63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284 Status:running}
	I0919 22:40:13.511864  102947 cri.go:135] skipping {63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284 running}: state = "running", want "paused"
	I0919 22:40:13.511877  102947 cri.go:129] container: {ID:7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c Status:running}
	I0919 22:40:13.511890  102947 cri.go:135] skipping {7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c running}: state = "running", want "paused"
	I0919 22:40:13.511898  102947 cri.go:129] container: {ID:8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d Status:running}
	I0919 22:40:13.511910  102947 cri.go:131] skipping 8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d - not in ps
	I0919 22:40:13.511916  102947 cri.go:129] container: {ID:83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad Status:running}
	I0919 22:40:13.511925  102947 cri.go:135] skipping {83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad running}: state = "running", want "paused"
	I0919 22:40:13.511935  102947 cri.go:129] container: {ID:a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910 Status:running}
	I0919 22:40:13.511941  102947 cri.go:131] skipping a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910 - not in ps
	I0919 22:40:13.511946  102947 cri.go:129] container: {ID:b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248 Status:running}
	I0919 22:40:13.511951  102947 cri.go:131] skipping b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248 - not in ps
	I0919 22:40:13.511957  102947 cri.go:129] container: {ID:c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6 Status:running}
	I0919 22:40:13.511969  102947 cri.go:135] skipping {c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6 running}: state = "running", want "paused"
	I0919 22:40:13.511976  102947 cri.go:129] container: {ID:e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5 Status:running}
	I0919 22:40:13.511988  102947 cri.go:135] skipping {e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5 running}: state = "running", want "paused"
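The cri.go lines above filter the runc listing against the earlier crictl output: IDs that never appeared there are dropped ("not in ps"), and the remainder are kept only when their state matches the requested one (here "paused"), so nothing is selected. A compact Go sketch of that filter, using simple local types rather than minikube's own:

package main

import "fmt"

// Simplified stand-in for the container records in the log above.
type container struct {
	ID     string
	Status string
}

// keepMatching mimics the skip logic: drop IDs not seen by crictl and drop
// containers whose state differs from the wanted state.
func keepMatching(all []container, inPS map[string]bool, want string) []string {
	var kept []string
	for _, c := range all {
		if !inPS[c.ID] {
			continue // "skipping ... - not in ps"
		}
		if c.Status != want {
			continue // `state = "running", want "paused"`
		}
		kept = append(kept, c.ID)
	}
	return kept
}

func main() {
	all := []container{
		{ID: "63dc43f0224f", Status: "running"},
		{ID: "35b9028490f7", Status: "running"},
	}
	inPS := map[string]bool{"63dc43f0224f": true}
	fmt.Println("containers to act on:", keepMatching(all, inPS, "paused"))
}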
	I0919 22:40:13.512041  102947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:40:13.524546  102947 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:40:13.524567  102947 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:40:13.524627  102947 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:40:13.537544  102947 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:13.538084  102947 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-326307" does not appear in /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:40:13.538273  102947 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14678/kubeconfig needs updating (will repair): [kubeconfig missing "ha-326307" cluster setting kubeconfig missing "ha-326307" context setting]
	I0919 22:40:13.538666  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:13.539452  102947 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:40:13.540084  102947 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:40:13.540104  102947 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:40:13.540111  102947 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:40:13.540118  102947 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:40:13.540125  102947 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:40:13.540609  102947 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:40:13.540743  102947 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:40:13.555466  102947 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:40:13.555575  102947 kubeadm.go:593] duration metric: took 31.000137ms to restartPrimaryControlPlane
	I0919 22:40:13.555603  102947 kubeadm.go:394] duration metric: took 143.274252ms to StartCluster
	I0919 22:40:13.555651  102947 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:13.555800  102947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:40:13.556731  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:13.557204  102947 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:40:13.557402  102947 start.go:241] waiting for startup goroutines ...
	I0919 22:40:13.557267  102947 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:40:13.557510  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:13.561726  102947 out.go:179] * Enabled addons: 
	I0919 22:40:13.563479  102947 addons.go:514] duration metric: took 6.21303ms for enable addons: enabled=[]
	I0919 22:40:13.563535  102947 start.go:246] waiting for cluster config update ...
	I0919 22:40:13.563548  102947 start.go:255] writing updated cluster config ...
	I0919 22:40:13.565943  102947 out.go:203] 
	I0919 22:40:13.568105  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:13.568246  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:13.570538  102947 out.go:179] * Starting "ha-326307-m02" control-plane node in "ha-326307" cluster
	I0919 22:40:13.572566  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:13.574955  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:13.576797  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:13.576835  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:13.576935  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:13.576982  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:13.576999  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:13.577147  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:13.603282  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:13.603304  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:13.603323  102947 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:40:13.603356  102947 start.go:360] acquireMachinesLock for ha-326307-m02: {Name:mk4919a9b19250804b0f53d01bcd11efaf9a431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:13.603419  102947 start.go:364] duration metric: took 47.152µs to acquireMachinesLock for "ha-326307-m02"
	I0919 22:40:13.603445  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:13.603459  102947 fix.go:54] fixHost starting: m02
	I0919 22:40:13.603697  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:40:13.626324  102947 fix.go:112] recreateIfNeeded on ha-326307-m02: state=Stopped err=<nil>
	W0919 22:40:13.626352  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:13.629640  102947 out.go:252] * Restarting existing docker container for "ha-326307-m02" ...
	I0919 22:40:13.629728  102947 cli_runner.go:164] Run: docker start ha-326307-m02
	I0919 22:40:13.926841  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:40:13.950131  102947 kic.go:430] container "ha-326307-m02" state is running.
	I0919 22:40:13.950515  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:40:13.973194  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:13.973503  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:13.973577  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:13.996029  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:13.996469  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32824 <nil> <nil>}
	I0919 22:40:13.996495  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:13.997409  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55282->127.0.0.1:32824: read: connection reset by peer
	I0919 22:40:17.135269  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:40:17.135298  102947 ubuntu.go:182] provisioning hostname "ha-326307-m02"
	I0919 22:40:17.135359  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.155772  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:17.156086  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32824 <nil> <nil>}
	I0919 22:40:17.156103  102947 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m02 && echo "ha-326307-m02" | sudo tee /etc/hostname
	I0919 22:40:17.308282  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:40:17.308354  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.329394  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:17.329602  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32824 <nil> <nil>}
	I0919 22:40:17.329620  102947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:40:17.469105  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:40:17.469136  102947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:40:17.469173  102947 ubuntu.go:190] setting up certificates
	I0919 22:40:17.469188  102947 provision.go:84] configureAuth start
	I0919 22:40:17.469243  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:40:17.489456  102947 provision.go:143] copyHostCerts
	I0919 22:40:17.489512  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:17.489551  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:40:17.489560  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:17.489629  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:40:17.489711  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:17.489728  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:40:17.489735  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:17.489771  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:40:17.489846  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:17.489864  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:40:17.489870  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:17.489896  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:40:17.489952  102947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m02 san=[127.0.0.1 192.168.49.3 ha-326307-m02 localhost minikube]
	I0919 22:40:17.687121  102947 provision.go:177] copyRemoteCerts
	I0919 22:40:17.687196  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:40:17.687230  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.706618  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:17.805482  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:40:17.805552  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:40:17.834469  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:40:17.834533  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:40:17.862491  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:40:17.862578  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:40:17.891048  102947 provision.go:87] duration metric: took 421.847088ms to configureAuth
	I0919 22:40:17.891077  102947 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:40:17.891323  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:17.891337  102947 machine.go:96] duration metric: took 3.917817402s to provisionDockerMachine
	I0919 22:40:17.891348  102947 start.go:293] postStartSetup for "ha-326307-m02" (driver="docker")
	I0919 22:40:17.891362  102947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:40:17.891426  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:40:17.891475  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.911877  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.017574  102947 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:40:18.021564  102947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:40:18.021608  102947 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:40:18.021620  102947 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:40:18.021627  102947 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:40:18.021641  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:40:18.021732  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:40:18.021827  102947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:40:18.021845  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:40:18.021965  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:40:18.037625  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:18.072355  102947 start.go:296] duration metric: took 180.992211ms for postStartSetup
	I0919 22:40:18.072434  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:40:18.072488  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:18.097080  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.200976  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:40:18.207724  102947 fix.go:56] duration metric: took 4.604261714s for fixHost
	I0919 22:40:18.207752  102947 start.go:83] releasing machines lock for "ha-326307-m02", held for 4.604318809s
	I0919 22:40:18.207819  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:40:18.233683  102947 out.go:179] * Found network options:
	I0919 22:40:18.235326  102947 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:40:18.236979  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:18.237024  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:40:18.237101  102947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:40:18.237148  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:18.237186  102947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:40:18.237248  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:18.262883  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.265825  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.472261  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:40:18.501316  102947 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:40:18.501403  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:40:18.517881  102947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:40:18.517907  102947 start.go:495] detecting cgroup driver to use...
	I0919 22:40:18.517943  102947 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:40:18.518009  102947 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:40:18.540215  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:40:18.558468  102947 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:40:18.558538  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:40:18.578938  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:40:18.606098  102947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:40:18.738984  102947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:40:18.861135  102947 docker.go:234] disabling docker service ...
	I0919 22:40:18.861295  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:40:18.889797  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:40:18.903559  102947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:40:19.020834  102947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:40:19.210102  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:40:19.253298  102947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:40:19.294451  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:40:19.314809  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:40:19.329896  102947 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:40:19.329968  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:40:19.344499  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:19.359934  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:40:19.375426  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:19.390525  102947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:40:19.405742  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:40:19.419676  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:40:19.433744  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:40:19.447497  102947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:40:19.459701  102947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:40:19.472280  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:19.590393  102947 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:40:19.844194  102947 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:40:19.844268  102947 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:40:19.848691  102947 start.go:563] Will wait 60s for crictl version
	I0919 22:40:19.848750  102947 ssh_runner.go:195] Run: which crictl
	I0919 22:40:19.852912  102947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:40:19.896612  102947 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:40:19.896665  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:19.922108  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:19.951040  102947 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:40:19.952600  102947 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:40:19.954094  102947 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:40:19.972221  102947 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:40:19.976367  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:19.988586  102947 mustload.go:65] Loading cluster: ha-326307
	I0919 22:40:19.988826  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:19.989048  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:20.009691  102947 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:40:20.009938  102947 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.3
	I0919 22:40:20.009958  102947 certs.go:194] generating shared ca certs ...
	I0919 22:40:20.009977  102947 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:20.010097  102947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:40:20.010186  102947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:40:20.010200  102947 certs.go:256] generating profile certs ...
	I0919 22:40:20.010274  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:40:20.010317  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.d9fee4c2
	I0919 22:40:20.010351  102947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:40:20.010361  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:40:20.010388  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:40:20.010403  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:40:20.010415  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:40:20.010427  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:40:20.010440  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:40:20.010451  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:40:20.010463  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:40:20.010507  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:40:20.010541  102947 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:40:20.010552  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:40:20.010572  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:40:20.010593  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:40:20.010613  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:40:20.010656  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:20.010681  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:40:20.010696  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:20.010706  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:40:20.010750  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:20.034999  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:20.130696  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:40:20.137701  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:40:20.181406  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:40:20.188123  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:40:20.209898  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:40:20.217560  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:40:20.265391  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:40:20.271849  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:40:20.306378  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:40:20.313419  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:40:20.338279  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:40:20.344910  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:40:20.368606  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:40:20.417189  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:40:20.473868  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:40:20.554542  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:40:20.629092  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:40:20.678888  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:40:20.722550  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:40:20.778639  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:40:20.828112  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:40:20.884904  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:40:20.936206  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:40:20.979746  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:40:21.011968  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:40:21.037922  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:40:21.058425  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:40:21.078533  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:40:21.099029  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:40:21.125522  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:40:21.151265  102947 ssh_runner.go:195] Run: openssl version
	I0919 22:40:21.157938  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:40:21.169944  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:40:21.174243  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:40:21.174339  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:40:21.182194  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:40:21.195623  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:40:21.210343  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:21.216012  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:21.216080  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:21.226359  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:40:21.239970  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:40:21.256305  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:40:21.263490  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:40:21.263550  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:40:21.274306  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:40:21.289549  102947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:40:21.294844  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:40:21.305190  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:40:21.317466  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:40:21.327473  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:40:21.337404  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:40:21.346840  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:40:21.355241  102947 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0919 22:40:21.355365  102947 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:40:21.355400  102947 kube-vip.go:115] generating kube-vip config ...
	I0919 22:40:21.355447  102947 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:40:21.372568  102947 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:21.372652  102947 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:40:21.372715  102947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:40:21.385812  102947 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:40:21.385902  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:40:21.396920  102947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:40:21.418422  102947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:40:21.441221  102947 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:40:21.461293  102947 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:40:21.465499  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:21.479394  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:21.609276  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:21.625324  102947 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:40:21.625678  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:21.627937  102947 out.go:179] * Verifying Kubernetes components...
	I0919 22:40:21.629432  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:21.754519  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:21.770966  102947 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:40:21.771034  102947 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:40:21.771308  102947 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m02" to be "Ready" ...
	I0919 22:40:21.780317  102947 node_ready.go:49] node "ha-326307-m02" is "Ready"
	I0919 22:40:21.780344  102947 node_ready.go:38] duration metric: took 9.008043ms for node "ha-326307-m02" to be "Ready" ...
	I0919 22:40:21.780357  102947 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:40:21.780412  102947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:40:21.794097  102947 api_server.go:72] duration metric: took 168.727042ms to wait for apiserver process to appear ...
	I0919 22:40:21.794124  102947 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:40:21.794147  102947 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:40:21.800333  102947 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:40:21.801474  102947 api_server.go:141] control plane version: v1.34.0
	I0919 22:40:21.801509  102947 api_server.go:131] duration metric: took 7.377354ms to wait for apiserver health ...
	I0919 22:40:21.801520  102947 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:40:21.810182  102947 system_pods.go:59] 24 kube-system pods found
	I0919 22:40:21.810226  102947 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.810244  102947 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.810254  102947 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.810262  102947 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.810268  102947 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running
	I0919 22:40:21.810276  102947 system_pods.go:61] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:21.810281  102947 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:21.810292  102947 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:21.810300  102947 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.810311  102947 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.810315  102947 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running
	I0919 22:40:21.810325  102947 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.810332  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.810336  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running
	I0919 22:40:21.810340  102947 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:21.810344  102947 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:21.810348  102947 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:21.810353  102947 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.810361  102947 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.810365  102947 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running
	I0919 22:40:21.810369  102947 system_pods.go:61] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:21.810372  102947 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:21.810375  102947 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:21.810378  102947 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:21.810383  102947 system_pods.go:74] duration metric: took 8.856915ms to wait for pod list to return data ...
	I0919 22:40:21.810390  102947 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:40:21.813818  102947 default_sa.go:45] found service account: "default"
	I0919 22:40:21.813853  102947 default_sa.go:55] duration metric: took 3.458375ms for default service account to be created ...
	I0919 22:40:21.813864  102947 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:40:21.820987  102947 system_pods.go:86] 24 kube-system pods found
	I0919 22:40:21.821019  102947 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.821027  102947 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.821034  102947 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.821040  102947 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.821044  102947 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running
	I0919 22:40:21.821048  102947 system_pods.go:89] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:21.821051  102947 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:21.821054  102947 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:21.821059  102947 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.821064  102947 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.821068  102947 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running
	I0919 22:40:21.821074  102947 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.821079  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.821083  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running
	I0919 22:40:21.821087  102947 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:21.821090  102947 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:21.821095  102947 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:21.821100  102947 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.821107  102947 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.821114  102947 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running
	I0919 22:40:21.821118  102947 system_pods.go:89] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:21.821121  102947 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:21.821124  102947 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:21.821127  102947 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:21.821133  102947 system_pods.go:126] duration metric: took 7.263023ms to wait for k8s-apps to be running ...
	I0919 22:40:21.821142  102947 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:40:21.821209  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:40:21.835069  102947 system_svc.go:56] duration metric: took 13.918083ms WaitForService to wait for kubelet
	I0919 22:40:21.835096  102947 kubeadm.go:578] duration metric: took 209.729975ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:40:21.835114  102947 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:40:21.839112  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:21.839140  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:21.839183  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:21.839191  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:21.839198  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:21.839203  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:21.839208  102947 node_conditions.go:105] duration metric: took 4.090003ms to run NodePressure ...
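The NodePressure check above just lists the cluster nodes and reads their reported CPU and ephemeral-storage capacity. A minimal client-go sketch of the same read, assuming a kubeconfig at the default location points at this cluster (the profile-specific client wiring in minikube's kapi package is not reproduced):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: ~/.kube/config (or $KUBECONFIG) points at the ha-326307 cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Mirrors the node_conditions.go lines: ephemeral storage 304681132Ki, cpu 8.
		fmt.Printf("%s ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}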
	I0919 22:40:21.839223  102947 start.go:241] waiting for startup goroutines ...
	I0919 22:40:21.839260  102947 start.go:255] writing updated cluster config ...
	I0919 22:40:21.841908  102947 out.go:203] 
	I0919 22:40:21.843889  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:21.844011  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:21.846125  102947 out.go:179] * Starting "ha-326307-m03" control-plane node in "ha-326307" cluster
	I0919 22:40:21.848304  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:21.850127  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:21.851602  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:21.851635  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:21.851746  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:21.851778  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:21.851789  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:21.851912  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:21.876321  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:21.876341  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:21.876357  102947 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:40:21.876378  102947 start.go:360] acquireMachinesLock for ha-326307-m03: {Name:mk07818636650a6efffb19d787e84a34d6f1dd98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:21.876432  102947 start.go:364] duration metric: took 39.311µs to acquireMachinesLock for "ha-326307-m03"
	I0919 22:40:21.876450  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:21.876473  102947 fix.go:54] fixHost starting: m03
	I0919 22:40:21.876688  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:40:21.896238  102947 fix.go:112] recreateIfNeeded on ha-326307-m03: state=Stopped err=<nil>
	W0919 22:40:21.896264  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:21.898402  102947 out.go:252] * Restarting existing docker container for "ha-326307-m03" ...
	I0919 22:40:21.898493  102947 cli_runner.go:164] Run: docker start ha-326307-m03
	I0919 22:40:22.169027  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:40:22.190097  102947 kic.go:430] container "ha-326307-m03" state is running.
	I0919 22:40:22.190500  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:40:22.212272  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:22.212572  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:22.212637  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:22.233877  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:22.234093  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I0919 22:40:22.234104  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:22.234859  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37302->127.0.0.1:32829: read: connection reset by peer
	I0919 22:40:25.378797  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:40:25.378831  102947 ubuntu.go:182] provisioning hostname "ha-326307-m03"
	I0919 22:40:25.378898  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:25.414501  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:25.414938  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I0919 22:40:25.415073  102947 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m03 && echo "ha-326307-m03" | sudo tee /etc/hostname
	I0919 22:40:25.588850  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:40:25.588948  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:25.610247  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:25.610522  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I0919 22:40:25.610550  102947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:40:25.754732  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
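The SSH snippet above patches /etc/hosts only when the node name is missing. A rough, simplified Go equivalent of that check, run locally for illustration; unlike the sed branch in the log it only appends and does not rewrite an existing 127.0.1.1 line, and the ssh_runner transport is omitted:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry appends "127.0.1.1 <name>" to the hosts file unless a
// line ending in <name> is already present, mirroring the grep guard in
// the shell snippet above. Path and node name are taken from the log.
func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == name {
			return nil // already mapped
		}
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", name)
	return err
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-326307-m03"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}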
	I0919 22:40:25.754765  102947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:40:25.754794  102947 ubuntu.go:190] setting up certificates
	I0919 22:40:25.754806  102947 provision.go:84] configureAuth start
	I0919 22:40:25.754866  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:40:25.775758  102947 provision.go:143] copyHostCerts
	I0919 22:40:25.775814  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:25.775859  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:40:25.775876  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:25.775969  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:40:25.776130  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:25.776178  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:40:25.776185  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:25.776236  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:40:25.776312  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:25.776338  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:40:25.776347  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:25.776387  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:40:25.776465  102947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m03 san=[127.0.0.1 192.168.49.4 ha-326307-m03 localhost minikube]
	I0919 22:40:25.957556  102947 provision.go:177] copyRemoteCerts
	I0919 22:40:25.957614  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:40:25.957661  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:25.977125  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.075851  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:40:26.075925  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:40:26.103453  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:40:26.103525  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:40:26.130922  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:40:26.130993  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:40:26.158446  102947 provision.go:87] duration metric: took 403.627341ms to configureAuth
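configureAuth regenerates the machine's Docker TLS server certificate with the SANs listed in the provision.go line above (127.0.0.1, 192.168.49.4, ha-326307-m03, localhost, minikube) and copies it to /etc/docker. A compact crypto/x509 sketch of issuing such a SAN-bearing server certificate; the throwaway CA, serial numbers and validity periods are placeholders (the real CA lives under .minikube/certs), and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA so the sketch stands alone; minikube reuses its existing CA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the provision.go log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-326307-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-326307-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}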
	I0919 22:40:26.158474  102947 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:40:26.158684  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:26.158696  102947 machine.go:96] duration metric: took 3.94610996s to provisionDockerMachine
	I0919 22:40:26.158706  102947 start.go:293] postStartSetup for "ha-326307-m03" (driver="docker")
	I0919 22:40:26.158718  102947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:40:26.158769  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:40:26.158815  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.177219  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.277051  102947 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:40:26.280902  102947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:40:26.280935  102947 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:40:26.280943  102947 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:40:26.280949  102947 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:40:26.280960  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:40:26.281017  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:40:26.281085  102947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:40:26.281094  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:40:26.281219  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:40:26.291493  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:26.319669  102947 start.go:296] duration metric: took 160.947592ms for postStartSetup
	I0919 22:40:26.319764  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:40:26.319819  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.340008  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.438911  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:40:26.444573  102947 fix.go:56] duration metric: took 4.568092826s for fixHost
	I0919 22:40:26.444606  102947 start.go:83] releasing machines lock for "ha-326307-m03", held for 4.568161658s
	I0919 22:40:26.444685  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:40:26.470387  102947 out.go:179] * Found network options:
	I0919 22:40:26.472070  102947 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:40:26.473856  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:26.473888  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:26.473917  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:26.473931  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:40:26.474012  102947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:40:26.474058  102947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:40:26.474062  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.474114  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.500808  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.503237  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.708883  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:40:26.738637  102947 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:40:26.738718  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:40:26.752845  102947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
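The two find commands above patch the loopback CNI config in place and park any bridge or podman configs by renaming them to *.mk_disabled. An approximate Go rendering of the disabling step, with the directory hard-coded as in the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d" // directory from the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Same match as the find expression: *bridge* or *podman*, not already disabled.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			old := filepath.Join(dir, name)
			if err := os.Rename(old, old+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}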
	I0919 22:40:26.752872  102947 start.go:495] detecting cgroup driver to use...
	I0919 22:40:26.752907  102947 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:40:26.752955  102947 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:40:26.771737  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:40:26.788372  102947 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:40:26.788434  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:40:26.810086  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:40:26.828338  102947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:40:26.983767  102947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:40:27.150072  102947 docker.go:234] disabling docker service ...
	I0919 22:40:27.150147  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:40:27.173008  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:40:27.193344  102947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:40:27.317738  102947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:40:27.460983  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:40:27.485592  102947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:40:27.507890  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:40:27.520044  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:40:27.534512  102947 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:40:27.534574  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:40:27.548984  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:27.562483  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:40:27.577519  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:27.592117  102947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:40:27.604075  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:40:27.616958  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:40:27.631964  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:40:27.646292  102947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:40:27.658210  102947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:40:27.672336  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:27.803893  102947 ssh_runner.go:195] Run: sudo systemctl restart containerd
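The sequence of sed edits above flips containerd to the systemd cgroup driver and pins the pause image before reloading units and restarting the service. A short Go sketch of the two key substitutions followed by the restart; the file path and pause tag are taken from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Same substitutions the sed commands in the log perform.
	data = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	data = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAll(data, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(path, data, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Reload units and restart containerd, as in the log.
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "containerd"},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "%v: %s\n", err, out)
		}
	}
}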
	I0919 22:40:28.062245  102947 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:40:28.062313  102947 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:40:28.066699  102947 start.go:563] Will wait 60s for crictl version
	I0919 22:40:28.066771  102947 ssh_runner.go:195] Run: which crictl
	I0919 22:40:28.071489  102947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:40:28.109371  102947 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:40:28.109444  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:28.135369  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:28.166192  102947 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:40:28.167830  102947 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:40:28.169229  102947 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:40:28.170416  102947 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:40:28.189509  102947 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:40:28.193804  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:28.206515  102947 mustload.go:65] Loading cluster: ha-326307
	I0919 22:40:28.206800  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:28.207069  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:28.226787  102947 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:40:28.227094  102947 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.4
	I0919 22:40:28.227201  102947 certs.go:194] generating shared ca certs ...
	I0919 22:40:28.227247  102947 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:28.227424  102947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:40:28.227487  102947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:40:28.227504  102947 certs.go:256] generating profile certs ...
	I0919 22:40:28.227586  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:40:28.227634  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604
	I0919 22:40:28.227713  102947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:40:28.227730  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:40:28.227749  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:40:28.227764  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:40:28.227783  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:40:28.227800  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:40:28.227819  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:40:28.227839  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:40:28.227862  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:40:28.227929  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:40:28.227971  102947 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:40:28.227984  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:40:28.228019  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:40:28.228051  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:40:28.228082  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:40:28.228166  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:28.228213  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:40:28.228239  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:40:28.228259  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.228383  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:28.247785  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:28.336571  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:40:28.341071  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:40:28.354226  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:40:28.358563  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:40:28.373723  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:40:28.378406  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:40:28.394406  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:40:28.399415  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:40:28.416091  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:40:28.420161  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:40:28.435710  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:40:28.439831  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:40:28.454973  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:40:28.488291  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:40:28.520386  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:40:28.548878  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:40:28.577674  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:40:28.606894  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:40:28.635467  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:40:28.664035  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:40:28.692528  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:40:28.721969  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:40:28.750129  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:40:28.777226  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:40:28.798416  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:40:28.818429  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:40:28.844040  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:40:28.875418  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:40:28.898298  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:40:28.918961  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:40:28.940259  102947 ssh_runner.go:195] Run: openssl version
	I0919 22:40:28.946752  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:40:28.959425  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.964456  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.964528  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.973714  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:40:28.984876  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:40:28.996258  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:40:29.000541  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:40:29.000605  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:40:29.008599  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:40:29.018788  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:40:29.030314  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:40:29.034634  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:40:29.034700  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:40:29.042685  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:40:29.052467  102947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:40:29.056255  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:40:29.063105  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:40:29.071819  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:40:29.079410  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:40:29.086705  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:40:29.094001  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:40:29.101257  102947 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0919 22:40:29.101378  102947 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
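The kubelet unit above is rendered from the node-specific values (hostname override, node IP, Kubernetes version). A small text/template sketch that reproduces that ExecStart drop-in from those three values; the template text is a trimmed stand-in for minikube's real unit template:

package main

import (
	"os"
	"text/template"
)

// Values taken from the kubeadm.go lines above.
type kubeletOpts struct {
	NodeName string
	NodeIP   string
	Version  string
}

const unit = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, kubeletOpts{
		NodeName: "ha-326307-m03",
		NodeIP:   "192.168.49.4",
		Version:  "v1.34.0",
	})
}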
	I0919 22:40:29.101410  102947 kube-vip.go:115] generating kube-vip config ...
	I0919 22:40:29.101456  102947 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:40:29.115062  102947 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:29.115120  102947 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
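Before emitting that manifest, kube-vip.go probed for the ip_vs kernel module and, finding none, fell back to plain ARP VIP handling ("giving up enabling control-plane load-balancing"). A bare-bones version of that probe; the module-name match is the only logic involved:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same check as the log: lsmod | grep ip_vs decides whether IPVS
	// load-balancing for the control plane can be enabled.
	out, err := exec.Command("lsmod").Output()
	if err != nil {
		fmt.Println("lsmod failed, assuming no ipvs:", err)
		return
	}
	if strings.Contains(string(out), "ip_vs") {
		fmt.Println("ip_vs available: control-plane load-balancing can be enabled")
	} else {
		fmt.Println("ip_vs missing: giving up enabling control-plane load-balancing")
	}
}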
	I0919 22:40:29.115184  102947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:40:29.124866  102947 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:40:29.124920  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:40:29.135111  102947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:40:29.156313  102947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:40:29.177045  102947 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:40:29.198544  102947 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:40:29.203037  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:29.216695  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:29.333585  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:29.349312  102947 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:40:29.349626  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:29.352738  102947 out.go:179] * Verifying Kubernetes components...
	I0919 22:40:29.354445  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:29.474185  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:29.488500  102947 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:40:29.488573  102947 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:40:29.488783  102947 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m03" to be "Ready" ...
	I0919 22:40:29.492092  102947 node_ready.go:49] node "ha-326307-m03" is "Ready"
	I0919 22:40:29.492121  102947 node_ready.go:38] duration metric: took 3.321791ms for node "ha-326307-m03" to be "Ready" ...
	I0919 22:40:29.492134  102947 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:40:29.492205  102947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:40:29.506850  102947 api_server.go:72] duration metric: took 157.484065ms to wait for apiserver process to appear ...
	I0919 22:40:29.506886  102947 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:40:29.506910  102947 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:40:29.511130  102947 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:40:29.512015  102947 api_server.go:141] control plane version: v1.34.0
	I0919 22:40:29.512036  102947 api_server.go:131] duration metric: took 5.141712ms to wait for apiserver health ...
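The healthz wait above is a plain HTTPS GET against the control-plane endpoint that succeeds once the body reads "ok". A minimal polling sketch; certificate verification is skipped here purely for brevity, whereas the real client presents the profile's client certificate and CA from the rest.Config shown earlier:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Brevity-only assumption; minikube verifies against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.49.2:8443/healthz" // endpoint from the log
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver did not become healthy")
}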
	I0919 22:40:29.512043  102947 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:40:29.518744  102947 system_pods.go:59] 24 kube-system pods found
	I0919 22:40:29.518774  102947 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.518782  102947 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.518787  102947 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:40:29.518791  102947 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:40:29.518796  102947 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:29.518800  102947 system_pods.go:61] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:29.518804  102947 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:29.518807  102947 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:29.518810  102947 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:40:29.518813  102947 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:40:29.518819  102947 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:29.518822  102947 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:40:29.518828  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.518858  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.518862  102947 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:29.518868  102947 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:29.518873  102947 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:29.518879  102947 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:40:29.518884  102947 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.518888  102947 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.518894  102947 system_pods.go:61] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:29.518897  102947 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:29.518900  102947 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:29.518905  102947 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:29.518910  102947 system_pods.go:74] duration metric: took 6.861836ms to wait for pod list to return data ...
	I0919 22:40:29.518919  102947 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:40:29.521697  102947 default_sa.go:45] found service account: "default"
	I0919 22:40:29.521719  102947 default_sa.go:55] duration metric: took 2.795273ms for default service account to be created ...
	I0919 22:40:29.521728  102947 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:40:29.527102  102947 system_pods.go:86] 24 kube-system pods found
	I0919 22:40:29.527136  102947 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.527144  102947 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.527166  102947 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:40:29.527174  102947 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:40:29.527181  102947 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:29.527186  102947 system_pods.go:89] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:29.527195  102947 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:29.527200  102947 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:29.527209  102947 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:40:29.527214  102947 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:40:29.527224  102947 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:29.527233  102947 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:40:29.527244  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.527251  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.527259  102947 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:29.527265  102947 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:29.527274  102947 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:29.527282  102947 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:40:29.527293  102947 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.527304  102947 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.527311  102947 system_pods.go:89] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:29.527318  102947 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:29.527326  102947 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:29.527331  102947 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:29.527342  102947 system_pods.go:126] duration metric: took 5.60777ms to wait for k8s-apps to be running ...
	I0919 22:40:29.527353  102947 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:40:29.527418  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:40:29.540084  102947 system_svc.go:56] duration metric: took 12.720236ms WaitForService to wait for kubelet
	I0919 22:40:29.540114  102947 kubeadm.go:578] duration metric: took 190.753677ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:40:29.540138  102947 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:40:29.543938  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:29.543961  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:29.543977  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:29.543981  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:29.543985  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:29.543988  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:29.543992  102947 node_conditions.go:105] duration metric: took 3.848698ms to run NodePressure ...
	I0919 22:40:29.544002  102947 start.go:241] waiting for startup goroutines ...
	I0919 22:40:29.544021  102947 start.go:255] writing updated cluster config ...
	I0919 22:40:29.546124  102947 out.go:203] 
	I0919 22:40:29.547729  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:29.547827  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:29.549464  102947 out.go:179] * Starting "ha-326307-m04" worker node in "ha-326307" cluster
	I0919 22:40:29.551423  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:29.552959  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:29.554347  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:29.554374  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:29.554466  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:29.554528  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:29.554544  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:29.554661  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:29.576604  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:29.576623  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:29.576636  102947 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:40:29.576658  102947 start.go:360] acquireMachinesLock for ha-326307-m04: {Name:mk65e4f546dae46b7c7a6cfe6f590e09a0a01676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:29.576722  102947 start.go:364] duration metric: took 36.867µs to acquireMachinesLock for "ha-326307-m04"
	I0919 22:40:29.576740  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:29.576747  102947 fix.go:54] fixHost starting: m04
	I0919 22:40:29.576991  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:40:29.599524  102947 fix.go:112] recreateIfNeeded on ha-326307-m04: state=Stopped err=<nil>
	W0919 22:40:29.599554  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:29.601341  102947 out.go:252] * Restarting existing docker container for "ha-326307-m04" ...
	I0919 22:40:29.601436  102947 cli_runner.go:164] Run: docker start ha-326307-m04
	I0919 22:40:29.856928  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:40:29.877141  102947 kic.go:430] container "ha-326307-m04" state is running.
	I0919 22:40:29.877564  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m04
	I0919 22:40:29.898099  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:29.898353  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:29.898408  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	I0919 22:40:29.919242  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:29.919493  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32834 <nil> <nil>}
	I0919 22:40:29.919509  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:29.920238  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53392->127.0.0.1:32834: read: connection reset by peer
	I0919 22:40:32.921592  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:35.923978  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:38.925460  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:41.925968  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:44.927435  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:47.928879  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:50.930439  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:53.931750  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:56.932223  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:59.933541  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:02.934449  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:05.936468  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:08.938720  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:11.939132  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:14.940311  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:17.941338  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:20.943720  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:23.944321  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:26.945127  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:29.946482  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:32.947311  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:35.949504  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:38.950829  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:41.951282  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:44.951718  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:47.952886  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:50.954501  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:53.955026  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:56.955566  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:59.956458  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:02.958263  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:05.960452  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:08.960827  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:11.961991  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:14.963364  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:17.964467  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:20.966794  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:23.967257  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:26.968419  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:29.969450  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:32.970449  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:35.972383  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:38.974402  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:41.974947  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:44.975961  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:47.977119  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:50.979045  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:53.979535  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:56.980106  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:59.981632  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:02.983145  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:05.985114  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:08.987742  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:11.988246  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:14.988636  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:17.990247  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:20.990690  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:23.991025  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:26.992363  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:29.994267  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:43:29.994298  102947 ubuntu.go:182] provisioning hostname "ha-326307-m04"
	I0919 22:43:29.994384  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.014799  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.014894  102947 machine.go:96] duration metric: took 3m0.116525554s to provisionDockerMachine
	I0919 22:43:30.014980  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:43:30.015024  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.033859  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.033976  102947 retry.go:31] will retry after 180.600333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:30.215391  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.234687  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.234800  102947 retry.go:31] will retry after 396.872897ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:30.632462  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.651421  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.651553  102947 retry.go:31] will retry after 330.021621ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:30.982141  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:31.001874  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:31.001981  102947 retry.go:31] will retry after 902.78257ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:31.905550  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:31.924562  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:43:31.924688  102947 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:31.924702  102947 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:31.924747  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:43:31.924776  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:31.944532  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:31.944644  102947 retry.go:31] will retry after 370.439297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:32.316311  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:32.335705  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:32.335801  102947 retry.go:31] will retry after 471.735503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:32.808402  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:32.828725  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:32.828845  102947 retry.go:31] will retry after 653.918581ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:33.483771  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:33.505126  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:43:33.505274  102947 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:33.505310  102947 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:33.505321  102947 fix.go:56] duration metric: took 3m3.928573811s for fixHost
	I0919 22:43:33.505333  102947 start.go:83] releasing machines lock for "ha-326307-m04", held for 3m3.928601896s
	W0919 22:43:33.505353  102947 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:33.505432  102947 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:33.505457  102947 start.go:729] Will try again in 5 seconds ...
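The three minutes of "connection refused" lines above are libmachine repeatedly probing the host port Docker forwarded to the node container's sshd (127.0.0.1:32834 in this run) until it either answers or the provision deadline expires. The following is only an illustrative sketch of that probe-and-retry pattern, not minikube's actual code; the address and timings are assumptions taken from this log.

// probe_ssh_port.go — minimal sketch of a dial-until-deadline loop like the one
// that produced the repeated "Error dialing TCP ... connection refused" lines.
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP dials addr every interval until it connects or deadline elapses.
func waitForTCP(addr string, interval, deadline time.Duration) error {
	end := time.Now().Add(deadline)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is accepting connections
		}
		fmt.Printf("Error dialing TCP: %v\n", err)
		if time.Now().After(end) {
			return fmt.Errorf("gave up waiting for %s after %s", addr, deadline)
		}
		time.Sleep(interval)
	}
}

func main() {
	// 127.0.0.1:32834 is the host port Docker mapped to the container's sshd in this run.
	if err := waitForTCP("127.0.0.1:32834", 3*time.Second, 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}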
	I0919 22:43:38.507265  102947 start.go:360] acquireMachinesLock for ha-326307-m04: {Name:mk65e4f546dae46b7c7a6cfe6f590e09a0a01676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:43:38.507371  102947 start.go:364] duration metric: took 72.258µs to acquireMachinesLock for "ha-326307-m04"
	I0919 22:43:38.507394  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:43:38.507402  102947 fix.go:54] fixHost starting: m04
	I0919 22:43:38.507660  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:43:38.526017  102947 fix.go:112] recreateIfNeeded on ha-326307-m04: state=Stopped err=<nil>
	W0919 22:43:38.526047  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:43:38.528104  102947 out.go:252] * Restarting existing docker container for "ha-326307-m04" ...
	I0919 22:43:38.528195  102947 cli_runner.go:164] Run: docker start ha-326307-m04
	I0919 22:43:38.792918  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:43:38.812750  102947 kic.go:430] container "ha-326307-m04" state is running.
	I0919 22:43:38.813122  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m04
	I0919 22:43:38.835015  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:43:38.835331  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:43:38.835404  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	I0919 22:43:38.855863  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:38.856092  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32839 <nil> <nil>}
	I0919 22:43:38.856104  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:43:38.856765  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33486->127.0.0.1:32839: read: connection reset by peer
	I0919 22:43:41.857087  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:44.857460  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:47.858230  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:50.860407  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:53.860840  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:56.862141  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:59.863585  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:02.864745  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:05.867376  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:08.869862  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:11.870894  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:14.871487  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:17.872736  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:20.874506  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:23.875596  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:26.875979  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:29.877435  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:32.878977  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:35.881595  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:38.883657  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:41.884099  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:44.885281  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:47.887113  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:50.889449  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:53.889898  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:56.891131  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:59.893426  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:02.895108  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:05.896902  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:08.899087  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:11.900184  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:14.901096  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:17.902201  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:20.904503  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:23.904962  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:26.906198  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:29.908575  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:32.910119  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:35.912526  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:38.914521  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:41.915090  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:44.916505  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:47.917924  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:50.919469  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:53.919814  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:56.920315  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:59.922657  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:02.924190  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:05.926504  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:08.928432  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:11.929228  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:14.930499  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:17.931536  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:20.934030  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:23.934965  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:26.936258  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:29.938459  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:32.939438  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:35.941457  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:38.943814  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:46:38.943857  102947 ubuntu.go:182] provisioning hostname "ha-326307-m04"
	I0919 22:46:38.943941  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:38.964275  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:38.964337  102947 machine.go:96] duration metric: took 3m0.128991371s to provisionDockerMachine
	I0919 22:46:38.964416  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:46:38.964451  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:38.983816  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:38.983960  102947 retry.go:31] will retry after 364.420464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:39.349386  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:39.369081  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:39.369225  102947 retry.go:31] will retry after 206.788026ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:39.576720  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:39.596502  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:39.596609  102947 retry.go:31] will retry after 511.892744ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:40.109367  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:40.129534  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:40.129648  102947 retry.go:31] will retry after 811.778179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:40.941718  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:40.962501  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:46:40.962610  102947 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:46:40.962628  102947 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:40.962672  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:46:40.962701  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:40.983319  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:40.983479  102947 retry.go:31] will retry after 310.783714ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:41.295059  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:41.314519  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:41.314654  102947 retry.go:31] will retry after 532.410728ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:41.847306  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:41.866776  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:41.866902  102947 retry.go:31] will retry after 498.480272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:42.366422  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:42.388450  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:46:42.388595  102947 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:46:42.388613  102947 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:42.388623  102947 fix.go:56] duration metric: took 3m3.881222347s for fixHost
	I0919 22:46:42.388631  102947 start.go:83] releasing machines lock for "ha-326307-m04", held for 3m3.881250201s
	W0919 22:46:42.388708  102947 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-326307" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:42.391386  102947 out.go:203] 
	W0919 22:46:42.393146  102947 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:46:42.393190  102947 out.go:285] * 
	W0919 22:46:42.395039  102947 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 22:46:42.396646  102947 out.go:203] 
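Every failing cli_runner call above is the same lookup: ask Docker which host port is mapped to 22/tcp inside the node container. Because ha-326307-m04 never reaches the running state, .NetworkSettings.Ports is empty, the template indexing fails, and the inspect command exits 1 ("unable to inspect a not running container to get SSH port"). A rough reproduction of that lookup follows; it is an illustrative sketch, not minikube's cli_runner, and assumes the docker CLI is on PATH.

// inspect_ssh_port.go — sketch of the host-port lookup that keeps failing above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker for the host port bound to 22/tcp in the named container.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).CombinedOutput()
	if err != nil {
		// For a stopped container the Ports map is empty, so the template errors out.
		return "", fmt.Errorf("inspect %s: %v: %s", container, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("ha-326307-m04")
	if err != nil {
		fmt.Println(err) // expected while the container is stopped, as in this log
		return
	}
	fmt.Println("ssh port:", port)
}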
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb7d0d80b9c23       6e38f40d628db       5 minutes ago       Running             storage-provisioner       2                   a66e01a465731       storage-provisioner
	fea1c0534d95d       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   c6c63e662186b       kindnet-gxnzs
	fff949799c16f       52546a367cc9e       6 minutes ago       Running             coredns                   1                   d66fcc49f8eef       coredns-66bc5c9577-wqvzd
	9b01ee2966e08       52546a367cc9e       6 minutes ago       Running             coredns                   1                   8915a954c3a5e       coredns-66bc5c9577-9j5pw
	471e8ec48d678       8c811b4aec35f       6 minutes ago       Running             busybox                   1                   4242a65c0c92e       busybox-7b57f96db7-m8swj
	a7d6081c4523a       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       1                   a66e01a465731       storage-provisioner
	c1e4cc3b9a7f1       df0860106674d       6 minutes ago       Running             kube-proxy                1                   bb87d6f8210e1       kube-proxy-8kxtv
	83bc1a5b44143       765655ea60781       6 minutes ago       Running             kube-vip                  0                   8124d18d08f1c       kube-vip-ha-326307
	63dc43f0224fa       46169d968e920       6 minutes ago       Running             kube-scheduler            1                   b84e223a297e4       kube-scheduler-ha-326307
	7a855457ed99a       a0af72f2ec6d6       6 minutes ago       Running             kube-controller-manager   1                   35b9028490f76       kube-controller-manager-ha-326307
	c543ffd76b85c       5f1f5298c888d       6 minutes ago       Running             etcd                      1                   a85600718119d       etcd-ha-326307
	e1a181d28b52f       90550c43ad2bc       6 minutes ago       Running             kube-apiserver            1                   4ff7be1cea576       kube-apiserver-ha-326307
	7791f71e5d5a5       8c811b4aec35f       21 minutes ago      Exited              busybox                   0                   b5e0c0fffea25       busybox-7b57f96db7-m8swj
	ca68bbc020e20       52546a367cc9e       22 minutes ago      Exited              coredns                   0                   132023f334782       coredns-66bc5c9577-9j5pw
	1f618dc8f0392       52546a367cc9e       22 minutes ago      Exited              coredns                   0                   a5ac32b4949ab       coredns-66bc5c9577-wqvzd
	365cc00c2e009       409467f978b4a       23 minutes ago      Exited              kindnet-cni               0                   96e027ec2b5fb       kindnet-gxnzs
	bd9e41958ffbb       df0860106674d       23 minutes ago      Exited              kube-proxy                0                   06da62af16945       kube-proxy-8kxtv
	456a0c3cbf5ce       46169d968e920       23 minutes ago      Exited              kube-scheduler            0                   f02b9e82ff9b1       kube-scheduler-ha-326307
	05ab0247624a7       a0af72f2ec6d6       23 minutes ago      Exited              kube-controller-manager   0                   6026f58e8c23a       kube-controller-manager-ha-326307
	e5c59a6abe977       5f1f5298c888d       23 minutes ago      Exited              etcd                      0                   5f89382a468ad       etcd-ha-326307
	e80d65e3c7c18       90550c43ad2bc       23 minutes ago      Exited              kube-apiserver            0                   3813626701bd1       kube-apiserver-ha-326307
	
	
	==> containerd <==
	Sep 19 22:40:50 ha-326307 containerd[478]: time="2025-09-19T22:40:50.496292846Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 19 22:40:50 ha-326307 containerd[478]: time="2025-09-19T22:40:50.941042111Z" level=info msg="RemoveContainer for \"f52d2d9f5881b5d50f95a3aeef2c876d51dd6b2be6c464a7464b26e8175b0fc6\""
	Sep 19 22:40:50 ha-326307 containerd[478]: time="2025-09-19T22:40:50.945894995Z" level=info msg="RemoveContainer for \"f52d2d9f5881b5d50f95a3aeef2c876d51dd6b2be6c464a7464b26e8175b0fc6\" returns successfully"
	Sep 19 22:41:02 ha-326307 containerd[478]: time="2025-09-19T22:41:02.735151860Z" level=info msg="CreateContainer within sandbox \"a66e01a46573183df8e2c6c041cfc03b30ce85291f726f529b293f52bca48ac9\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
	Sep 19 22:41:02 ha-326307 containerd[478]: time="2025-09-19T22:41:02.750197533Z" level=info msg="CreateContainer within sandbox \"a66e01a46573183df8e2c6c041cfc03b30ce85291f726f529b293f52bca48ac9\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"bb7d0d80b9c2303a244cfffc060085bd15528b57546277cdaaaf2dd707b68f1f\""
	Sep 19 22:41:02 ha-326307 containerd[478]: time="2025-09-19T22:41:02.750866519Z" level=info msg="StartContainer for \"bb7d0d80b9c2303a244cfffc060085bd15528b57546277cdaaaf2dd707b68f1f\""
	Sep 19 22:41:02 ha-326307 containerd[478]: time="2025-09-19T22:41:02.809028664Z" level=info msg="StartContainer for \"bb7d0d80b9c2303a244cfffc060085bd15528b57546277cdaaaf2dd707b68f1f\" returns successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.721548399Z" level=info msg="RemoveContainer for \"d1c181558b58c3ebc7dae5df97b80119a8c6d18dc19b0532b61999c4db0c6668\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.726063631Z" level=info msg="RemoveContainer for \"d1c181558b58c3ebc7dae5df97b80119a8c6d18dc19b0532b61999c4db0c6668\" returns successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.728293194Z" level=info msg="StopPodSandbox for \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.728427999Z" level=info msg="TearDown network for sandbox \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\" successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.728450762Z" level=info msg="StopPodSandbox for \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\" returns successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.728930508Z" level=info msg="RemovePodSandbox for \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.728969583Z" level=info msg="Forcibly stopping sandbox \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.729045579Z" level=info msg="TearDown network for sandbox \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\" successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.733274152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.733381747Z" level=info msg="RemovePodSandbox \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\" returns successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734017576Z" level=info msg="StopPodSandbox for \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734138515Z" level=info msg="TearDown network for sandbox \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\" successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734174247Z" level=info msg="StopPodSandbox for \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\" returns successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734599814Z" level=info msg="RemovePodSandbox for \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734628547Z" level=info msg="Forcibly stopping sandbox \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734699211Z" level=info msg="TearDown network for sandbox \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\" successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.738452443Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.738554754Z" level=info msg="RemovePodSandbox \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\" returns successfully"
	
	
	==> coredns [1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6] <==
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54337 - 24572 "HINFO IN 5143313645322175939.5313042790825403134. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069732464s
	[INFO] 10.244.0.4:35490 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000326279s
	[INFO] 10.244.0.4:55330 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.014239882s
	[INFO] 10.244.1.2:39628 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210602s
	[INFO] 10.244.1.2:46891 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.001261026s
	[INFO] 10.244.1.2:43124 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.00098216s
	[INFO] 10.244.1.2:49555 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.00013424s
	[INFO] 10.244.0.4:40362 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00024135s
	[INFO] 10.244.0.4:45629 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168694s
	[INFO] 10.244.1.2:52354 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189457s
	[INFO] 10.244.1.2:43857 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161715s
	[INFO] 10.244.1.2:51922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145764s
	[INFO] 10.244.1.2:57320 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009888497s
	[INFO] 10.244.1.2:49841 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169285s
	[INFO] 10.244.0.4:51548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159656s
	[INFO] 10.244.0.4:48681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110507s
	[INFO] 10.244.1.2:52993 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137337s
	[INFO] 10.244.0.4:59855 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113524s
	[INFO] 10.244.1.2:56284 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188608s
	[INFO] 10.244.1.2:58675 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149085s
	[INFO] 10.244.1.2:38911 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131505s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9b01ee2966e081085b732d62e68985fd9249574188499e7e99fa53ff3e585c2d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35530 - 6163 "HINFO IN 6373030861249236477.4474115650148028833. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02205233s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93] <==
	[INFO] 127.0.0.1:49588 - 50300 "HINFO IN 9047056621409016881.2982736294753326061. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063128768s
	[INFO] 10.244.0.4:39328 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.014249598s
	[INFO] 10.244.0.4:59759 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.013332957s
	[INFO] 10.244.0.4:50336 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.008865788s
	[INFO] 10.244.1.2:42753 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000158745s
	[INFO] 10.244.0.4:52334 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261159s
	[INFO] 10.244.0.4:43558 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010645205s
	[INFO] 10.244.0.4:51059 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154122s
	[INFO] 10.244.0.4:46147 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012594143s
	[INFO] 10.244.0.4:39163 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143755s
	[INFO] 10.244.0.4:57061 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014731s
	[INFO] 10.244.1.2:59502 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000129746s
	[INFO] 10.244.1.2:49570 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172915s
	[INFO] 10.244.1.2:48519 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000175653s
	[INFO] 10.244.0.4:50569 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000326714s
	[INFO] 10.244.0.4:45465 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000234038s
	[INFO] 10.244.1.2:52569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176154s
	[INFO] 10.244.1.2:36719 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000205481s
	[INFO] 10.244.1.2:58705 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195468s
	[INFO] 10.244.0.4:38035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216169s
	[INFO] 10.244.0.4:52287 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129599s
	[INFO] 10.244.0.4:37285 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186803s
	[INFO] 10.244.1.2:39163 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165716s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fff949799c16ffb392a665b0e5af2f326948a468e2495b8ea2fa176e06b5cfbf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60701 - 36326 "HINFO IN 1706815658337671432.2830354807318160675. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06080012s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
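The "dial tcp 10.96.0.1:443: i/o timeout" lines above are CoreDNS's kubernetes plugin failing its initial List calls against the in-cluster apiserver Service VIP while the control plane was still restarting. The sketch below reproduces that same call path with client-go; it is illustrative only (not CoreDNS code) and assumes it runs in a pod whose service account is allowed to list namespaces.

// apiserver_probe.go — sketch of the List request the reflector above is timing out on.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		fmt.Println("in-cluster config:", err)
		return
	}
	cfg.Timeout = 10 * time.Second // fail fast instead of hanging on a dead VIP

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("client:", err)
		return
	}

	// Same request the reflector makes: list namespaces with a 500-item page limit.
	_, err = cs.CoreV1().Namespaces().List(context.Background(), metav1.ListOptions{Limit: 500})
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // corresponds to the 10.96.0.1:443 timeouts above
		return
	}
	fmt.Println("apiserver reachable")
}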
	
	
	==> describe nodes <==
	Name:               ha-326307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:23:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:46:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:45:24 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:45:24 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:45:24 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:45:24 +0000   Fri, 19 Sep 2025 22:23:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-326307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6ba0924deaa4643b45558c406a92530
	  System UUID:                9c3f30ed-68b2-4a1c-af95-9031ae210a78
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-m8swj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-66bc5c9577-9j5pw             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23m
	  kube-system                 coredns-66bc5c9577-wqvzd             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23m
	  kube-system                 etcd-ha-326307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         23m
	  kube-system                 kindnet-gxnzs                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23m
	  kube-system                 kube-apiserver-ha-326307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-326307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-8kxtv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-326307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-326307                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m23s                  kube-proxy       
	  Normal  Starting                 23m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)      kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)      kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)      kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                    kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  Starting                 23m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    23m                    kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  23m                    kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           8m3s                   node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  Starting                 6m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m31s (x8 over 6m31s)  kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s (x8 over 6m31s)  kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s (x7 over 6m31s)  kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m21s                  node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           6m21s                  node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	
	
	Name:               ha-326307-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:46:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:43:43 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:43:43 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:43:43 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:43:43 +0000   Fri, 19 Sep 2025 22:24:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-326307-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 9fd69bf7d4de4d0cb4316de818a4daa2
	  System UUID:                8095cd89-f43b-4d8a-adef-b40d6aaa7ad2
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tfpvf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 etcd-ha-326307-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         22m
	  kube-system                 kindnet-mk6pv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-ha-326307-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-326307-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-q8mtj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-326307-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-326307-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 22m                    kube-proxy       
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  NodeAllocatableEnforced  8m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m10s (x7 over 8m10s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m10s (x8 over 8m10s)  kubelet          Node ha-326307-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m10s (x8 over 8m10s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m10s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m4s                   node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  Starting                 6m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m29s (x8 over 6m29s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m29s (x8 over 6m29s)  kubelet          Node ha-326307-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m29s (x7 over 6m29s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m22s                  node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           6m22s                  node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	
	
	Name:               ha-326307-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:46:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:46:22 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:46:22 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:46:22 +0000   Fri, 19 Sep 2025 22:24:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:46:22 +0000   Fri, 19 Sep 2025 22:24:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-326307-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66949d767c1d4946893c2c27acfe311d
	  System UUID:                5814a8d4-c435-490f-8e5e-a8b038e01be7
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-jdczt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 etcd-ha-326307-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         22m
	  kube-system                 kindnet-dmxl8                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-ha-326307-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-326307-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-ws89d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-326307-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-326307-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode           8m4s                   node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode           6m22s                  node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  RegisteredNode           6m22s                  node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	  Normal  Starting                 6m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m21s (x9 over 6m21s)  kubelet          Node ha-326307-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s (x7 over 6m21s)  kubelet          Node ha-326307-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s (x7 over 6m21s)  kubelet          Node ha-326307-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-326307-m03 event: Registered Node ha-326307-m03 in Controller
	
	
	==> dmesg <==
	[Sep19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001853] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001006] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.090013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.467231] i8042: Warning: Keylock active
	[  +0.009747] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004210] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001075] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000901] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000908] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001042] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001465] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.534799] block sda: the capability attribute has been deprecated.
	[  +0.099617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027269] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.089616] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6] <==
	{"level":"warn","ts":"2025-09-19T22:40:21.590359Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:40:21.615632Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:40:21.637864Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:40:21.666227Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:40:21.670202Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:40:21.686982Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:40:21.716264Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:40:21.787879Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:40:21.798960Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:40:21.802789Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:40:21.811861Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:40:21.815288Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:40:21.815495Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:40:21.836693Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:40:23.589404Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5512420eb470d1ce","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:40:23.589432Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5512420eb470d1ce","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:40:23.726363Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"5512420eb470d1ce","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:40:23.726421Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"5512420eb470d1ce","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-09-19T22:40:24.172750Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5512420eb470d1ce","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-19T22:40:24.172798Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:40:24.172834Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:40:24.177593Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5512420eb470d1ce","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-19T22:40:24.177644Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:40:24.185512Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:40:24.185980Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	
	
	==> etcd [e5c59a6abe97751de42afd27010936d1c3a401fad6cd730e75a1692a895b4fbc] <==
	{"level":"info","ts":"2025-09-19T22:39:52.140938Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-09-19T22:39:52.162339Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082596421131,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-19T22:39:52.340049Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"1.996479221s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-09-19T22:39:52.340124Z","caller":"traceutil/trace.go:172","msg":"trace[586308872] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"1.996568167s","start":"2025-09-19T22:39:50.343542Z","end":"2025-09-19T22:39:52.340111Z","steps":["trace[586308872] 'agreement among raft nodes before linearized reading'  (duration: 1.996477658s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:39:52.340628Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:39:50.343527Z","time spent":"1.997078725s","remote":"127.0.0.1:36004","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2025/09/19 22:39:52 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-09-19T22:39:52.496622Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:39:45.496513Z","time spent":"7.000101766s","remote":"127.0.0.1:36464","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	{"level":"warn","ts":"2025-09-19T22:39:52.664567Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082596421131,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-19T22:39:53.164691Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082596421131,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-19T22:39:53.664930Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082596421131,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-09-19T22:39:53.841224Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-19T22:39:53.841312Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-19T22:39:53.841335Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4070] sent MsgPreVote request to 5512420eb470d1ce at term 2"}
	{"level":"info","ts":"2025-09-19T22:39:53.841349Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4070] sent MsgPreVote request to e4477a6cd7815365 at term 2"}
	{"level":"info","ts":"2025-09-19T22:39:53.841387Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-09-19T22:39:53.841403Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-09-19T22:39:53.856629Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"10.006331529s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-09-19T22:39:53.856703Z","caller":"traceutil/trace.go:172","msg":"trace[357958415] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"10.006425985s","start":"2025-09-19T22:39:43.850264Z","end":"2025-09-19T22:39:53.856690Z","steps":["trace[357958415] 'agreement among raft nodes before linearized reading'  (duration: 10.006330214s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:39:53.856753Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:39:43.850240Z","time spent":"10.006497987s","remote":"127.0.0.1:36302","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	2025/09/19 22:39:53 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-09-19T22:39:54.165033Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082596421131,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-19T22:39:54.350624Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"1.999804258s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context deadline exceeded"}
	{"level":"info","ts":"2025-09-19T22:39:54.350972Z","caller":"traceutil/trace.go:172","msg":"trace[1511115829] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"2.00016656s","start":"2025-09-19T22:39:52.350791Z","end":"2025-09-19T22:39:54.350957Z","steps":["trace[1511115829] 'agreement among raft nodes before linearized reading'  (duration: 1.999802512s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:39:54.351034Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:39:52.350777Z","time spent":"2.000237823s","remote":"127.0.0.1:35978","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2025/09/19 22:39:54 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> kernel <==
	 22:46:44 up  1:29,  0 users,  load average: 2.16, 1.37, 1.08
	Linux ha-326307 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [365cc00c2e009eeed7e71d1202a4d406c12f0d9faee38762ba691eb2d7c71f89] <==
	I0919 22:39:10.992568       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:20.990595       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:20.990634       1 main.go:301] handling current node
	I0919 22:39:20.990655       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:20.990663       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:39:20.990874       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:20.990888       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:30.995276       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:30.995312       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:30.995572       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:30.995598       1 main.go:301] handling current node
	I0919 22:39:30.995611       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:30.995615       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:39:40.996306       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:40.996354       1 main.go:301] handling current node
	I0919 22:39:40.996386       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:40.996395       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:39:40.996628       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:40.996654       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:50.991728       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:50.991865       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:39:50.992227       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:50.992324       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:50.992803       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:50.992828       1 main.go:301] handling current node
	
	
	==> kindnet [fea1c0534d95d8681a40f476ef920c8ced5eb8897a63d871e66830a2e35509fc] <==
	I0919 22:46:01.328655       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:11.327579       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:46:11.327629       1 main.go:301] handling current node
	I0919 22:46:11.327653       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:11.327662       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:46:11.327920       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:46:11.327938       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:21.328030       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:46:21.328073       1 main.go:301] handling current node
	I0919 22:46:21.328087       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:21.328093       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:46:21.328336       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:46:21.328349       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:31.327485       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:31.327520       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:46:31.327776       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:46:31.327794       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:31.327897       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:46:31.327908       1 main.go:301] handling current node
	I0919 22:46:41.328117       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:46:41.328176       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:41.328398       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:46:41.328415       1 main.go:301] handling current node
	I0919 22:46:41.328447       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:41.328457       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5] <==
	I0919 22:40:19.279381       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	W0919 22:40:19.281370       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3 192.168.49.4]
	I0919 22:40:19.295421       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0919 22:40:19.295734       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:40:19.295813       1 policy_source.go:240] refreshing policies
	I0919 22:40:19.318977       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 22:40:19.385137       1 controller.go:667] quota admission added evaluator for: endpoints
	E0919 22:40:19.394148       1 controller.go:97] Error removing old endpoints from kubernetes service: Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:40:19.817136       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0919 22:40:20.175946       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 22:40:21.106965       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I0919 22:40:21.115392       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 22:40:22.902022       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:40:23.000359       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 22:40:23.094961       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0919 22:41:31.899871       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:41:34.521052       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:42:39.388525       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:42:45.838122       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:43:41.302570       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:44:00.530191       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:44:44.037874       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:45:10.813928       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:46:01.956836       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:46:26.916270       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-apiserver [e80d65e3c7c18da87f7fa003e39382f5a4285ba4782fc295197421c6b882a161] <==
	E0919 22:39:54.523383       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.523431       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.526237       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.526320       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.522979       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527081       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527220       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527341       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527429       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527492       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527556       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527638       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528262       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528338       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528394       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528418       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528451       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528480       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528501       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533700       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533915       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533941       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533972       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533985       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533997       1 watcher.go:335] watch chan error: etcdserver: no leader
	
	
	==> kube-controller-manager [05ab0247624a7f8ffa6bc948e3abc3adc49911c297291eb0a6dd42e3df39f4cd] <==
	I0919 22:23:38.744532       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:23:38.744726       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0919 22:23:38.744739       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0919 22:23:38.744729       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:23:38.744737       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:23:38.744759       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:23:38.745195       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 22:23:38.745255       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:23:38.746448       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:23:38.748706       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 22:23:38.750017       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.750086       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:23:38.751270       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 22:23:38.760899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 22:23:38.760926       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 22:23:38.760971       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 22:23:38.765332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.771790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:08.307746       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m02\" does not exist"
	I0919 22:24:08.319829       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:24:08.699971       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m02"
	E0919 22:24:31.036531       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8ztpb failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8ztpb\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:24:31.706808       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m03\" does not exist"
	I0919 22:24:31.736561       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:24:33.715916       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m03"
	
	
	==> kube-controller-manager [7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c] <==
	I0919 22:40:22.614846       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:40:22.614855       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 22:40:22.616016       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0919 22:40:22.622579       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0919 22:40:22.624722       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 22:40:22.626205       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:40:22.627256       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:40:22.631207       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:40:22.638798       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:40:22.639864       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0919 22:40:22.639886       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:40:22.639904       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0919 22:40:22.640312       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 22:40:22.640328       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:40:22.640420       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 22:40:22.640606       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307"
	I0919 22:40:22.640638       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m03"
	I0919 22:40:22.640606       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m02"
	I0919 22:40:22.640694       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 22:40:22.946089       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-49s8d\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:40:22.946224       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e75e0609-6186-44fd-8674-15383f700490", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-49s8d": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:40:56.500901       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-49s8d\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:40:56.501810       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e75e0609-6186-44fd-8674-15383f700490", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-49s8d": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:40:57.687491       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-49s8d\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:40:57.688223       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e75e0609-6186-44fd-8674-15383f700490", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-49s8d": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [bd9e41958ffbbb27dab3d180a56fb27df36a1a1896db3a6f322a8aabcda57677] <==
	I0919 22:23:40.183862       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:23:40.251957       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:23:40.353105       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:23:40.353291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:23:40.353503       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:23:40.383440       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:23:40.383522       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:23:40.391534       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:23:40.391999       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:23:40.392045       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:23:40.394189       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:23:40.394304       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:23:40.394470       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:23:40.394480       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:23:40.394266       1 config.go:200] "Starting service config controller"
	I0919 22:23:40.394506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:23:40.394279       1 config.go:309] "Starting node config controller"
	I0919 22:23:40.394533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:23:40.394540       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:23:40.494617       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:23:40.494643       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:23:40.494649       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c1e4cc3b9a7f1259a1339b951fd30079b99dc7acedc895c7ae90814405daad16] <==
	I0919 22:40:20.575328       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:40:20.672061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:40:20.772951       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:40:20.773530       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:40:20.774779       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:40:20.837591       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:40:20.837664       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:40:20.853483       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:40:20.853910       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:40:20.853934       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:40:20.859319       1 config.go:309] "Starting node config controller"
	I0919 22:40:20.859436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:40:20.859447       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:40:20.859941       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:40:20.859974       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:40:20.860439       1 config.go:200] "Starting service config controller"
	I0919 22:40:20.860604       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:40:20.861833       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:40:20.862286       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:40:20.960109       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:40:20.960793       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:40:20.962617       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [456a0c3cbf5ce028b9cbac658728c1fee13ad8e2659bfa0c625cd685d711c708] <==
	E0919 22:23:33.115040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:23:33.129278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:23:33.194774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:23:33.337699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0919 22:23:35.055089       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:24:08.346116       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:08.346301       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:08.365410       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-78xs2" node="ha-326307-m02"
	E0919 22:24:08.365600       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="kube-system/kindnet-78xs2"
	E0919 22:24:08.379248       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"kindnet-78xs2\" not found" pod="kube-system/kindnet-78xs2"
	E0919 22:24:10.002296       1 schedule_one.go:975] "Scheduler cache AssumePod failed" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:10.002334       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	I0919 22:24:10.002368       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:31.751287       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-pnj9r" node="ha-326307-m03"
	E0919 22:24:31.751375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-pnj9r"
	E0919 22:24:31.887089       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:31.887576       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 173e48ec-ef56-4824-9f55-a04b199b7943(kube-system/kindnet-qxwpq) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	E0919 22:24:31.887605       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	I0919 22:24:31.888969       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:35.828083       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-nzzlb" node="ha-326307-m03"
	E0919 22:24:35.828187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-nzzlb"
	E0919 22:24:35.839864       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	E0919 22:24:35.839940       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ba4fd407-2e93-4324-ab2d-4f192d79fdf5(kube-system/kindnet-dmxl8) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	E0919 22:24:35.839964       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	I0919 22:24:35.841757       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	
	
	==> kube-scheduler [63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284] <==
	I0919 22:40:14.121705       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:40:19.175600       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 22:40:19.175869       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 22:40:19.175952       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:40:19.175968       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:40:19.217556       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:40:19.217674       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:40:19.220816       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:40:19.221038       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:40:19.226224       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:40:19.226332       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:40:19.321477       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.402545     619 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.403468     619 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:40:19 ha-326307 kubelet[619]: E0919 22:40:19.407687     619 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-326307\" already exists" pod="kube-system/kube-apiserver-ha-326307"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.701084     619 apiserver.go:52] "Watching apiserver"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.707631     619 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-326307" podUID="36baecf0-60bd-41c0-a3c8-45e4f6ebddad"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.728881     619 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-326307"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.728907     619 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-326307"
	Sep 19 22:40:19 ha-326307 kubelet[619]: E0919 22:40:19.731920     619 status_manager.go:1041] "Failed to update status for pod" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36baecf0-60bd-41c0-a3c8-45e4f6ebddad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-19T22:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-19T22:40:12Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-19T22:40:13Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-19T22:40:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-19T22:40:12Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"containerd://83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad\\\",\\\"image\\\":\\\"ghcr.io/kube-vip/kube-vip:v1.0.0\\\",\\\"imageID\\\":\\\"ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-vip\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-19T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/admin.conf\\\",\\\"name\\\":\\\"kubeconfig\\\"}]}],\\\"startTime\\\":\\\"2025-09-19T22:40:12Z\\\"}}\" for pod \"kube-system\"/\"kube-vip-ha-326307\": pods \"kube-vip-ha-326307\" not found" pod="kube-system/kube-vip-ha-326307"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.801129     619 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813377     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cafe04c6-2dce-4b93-b6d1-205efc39b360-tmp\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813554     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-xtables-lock\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813666     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70be5fcc-7ab6-4eb1-870d-988fee1a01bb-xtables-lock\") pod \"kube-proxy-8kxtv\" (UID: \"70be5fcc-7ab6-4eb1-870d-988fee1a01bb\") " pod="kube-system/kube-proxy-8kxtv"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813815     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-lib-modules\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813849     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70be5fcc-7ab6-4eb1-870d-988fee1a01bb-lib-modules\") pod \"kube-proxy-8kxtv\" (UID: \"70be5fcc-7ab6-4eb1-870d-988fee1a01bb\") " pod="kube-system/kube-proxy-8kxtv"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813876     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-cni-cfg\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.823375     619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-326307" podStartSLOduration=0.823354362 podStartE2EDuration="823.354362ms" podCreationTimestamp="2025-09-19 22:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:40:19.822728814 +0000 UTC m=+7.186818639" watchObservedRunningTime="2025-09-19 22:40:19.823354362 +0000 UTC m=+7.187444186"
	Sep 19 22:40:20 ha-326307 kubelet[619]: I0919 22:40:20.739430     619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fb2219973c6b37a95b47a05e51f4922" path="/var/lib/kubelet/pods/5fb2219973c6b37a95b47a05e51f4922/volumes"
	Sep 19 22:40:21 ha-326307 kubelet[619]: I0919 22:40:21.854071     619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 19 22:40:26 ha-326307 kubelet[619]: I0919 22:40:26.469144     619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 19 22:40:27 ha-326307 kubelet[619]: I0919 22:40:27.660037     619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 19 22:40:50 ha-326307 kubelet[619]: I0919 22:40:50.939471     619 scope.go:117] "RemoveContainer" containerID="f52d2d9f5881b5d50f95a3aeef2c876d51dd6b2be6c464a7464b26e8175b0fc6"
	Sep 19 22:40:50 ha-326307 kubelet[619]: I0919 22:40:50.939831     619 scope.go:117] "RemoveContainer" containerID="a7d6081c4523a1615c9325b1139e2303619e28b6fc78896684594ac51dc7c0d2"
	Sep 19 22:40:50 ha-326307 kubelet[619]: E0919 22:40:50.940028     619 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cafe04c6-2dce-4b93-b6d1-205efc39b360)\"" pod="kube-system/storage-provisioner" podUID="cafe04c6-2dce-4b93-b6d1-205efc39b360"
	Sep 19 22:41:02 ha-326307 kubelet[619]: I0919 22:41:02.729182     619 scope.go:117] "RemoveContainer" containerID="a7d6081c4523a1615c9325b1139e2303619e28b6fc78896684594ac51dc7c0d2"
	Sep 19 22:41:12 ha-326307 kubelet[619]: I0919 22:41:12.720023     619 scope.go:117] "RemoveContainer" containerID="d1c181558b58c3ebc7dae5df97b80119a8c6d18dc19b0532b61999c4db0c6668"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-326307 -n ha-326307
helpers_test.go:269: (dbg) Run:  kubectl --context ha-326307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-jdczt
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-326307 describe pod busybox-7b57f96db7-jdczt
helpers_test.go:290: (dbg) kubectl --context ha-326307 describe pod busybox-7b57f96db7-jdczt:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-jdczt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ha-326307-m03/192.168.49.4
	Start Time:       Fri, 19 Sep 2025 22:25:18 +0000
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwg8l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xwg8l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                  Age                   From               Message
	  ----     ------                  ----                  ----               -------
	  Warning  FailedScheduling        21m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-jdczt": pod busybox-7b57f96db7-jdczt is already assigned to node "ha-326307-m03"
	  Warning  FailedScheduling        21m                   default-scheduler  running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "busybox-7b57f96db7-jdczt": pod busybox-7b57f96db7-jdczt is already assigned to node "ha-326307-m03"
	  Normal   Scheduled               21m                   default-scheduler  Successfully assigned default/busybox-7b57f96db7-jdczt to ha-326307-m03
	  Warning  FailedCreatePodSandBox  21m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f949ef20a496c1ef4510b9586bfdf0aa02ea1ca9948f762b5576ef36acab80c9": failed to find network info for sandbox "f949ef20a496c1ef4510b9586bfdf0aa02ea1ca9948f762b5576ef36acab80c9"
	  Warning  FailedCreatePodSandBox  21m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "306b20c8e47aaeb0b6ae068e406157020ceddab45da2b4f2ab7d80c0e47f4391": failed to find network info for sandbox "306b20c8e47aaeb0b6ae068e406157020ceddab45da2b4f2ab7d80c0e47f4391"
	  Warning  FailedCreatePodSandBox  21m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e6c50e24733dc1514dd610f1c51f99bc1a57d10929036ae887a87c4b187b9ac1": failed to find network info for sandbox "e6c50e24733dc1514dd610f1c51f99bc1a57d10929036ae887a87c4b187b9ac1"
	  Warning  FailedCreatePodSandBox  20m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9d4e7715d1c071862624112264db649229347a018044c9075df60fb9940c8e8a": failed to find network info for sandbox "9d4e7715d1c071862624112264db649229347a018044c9075df60fb9940c8e8a"
	  Warning  FailedCreatePodSandBox  20m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d188384b76e1cf43ce05a368351c54023e455d3fd0fddf79dc0d717558b93ee6": failed to find network info for sandbox "d188384b76e1cf43ce05a368351c54023e455d3fd0fddf79dc0d717558b93ee6"
	  Warning  FailedCreatePodSandBox  20m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "726dbde7347664ddd373a329867d125e92a7173ca43b01448ce154579a81a0bb": failed to find network info for sandbox "726dbde7347664ddd373a329867d125e92a7173ca43b01448ce154579a81a0bb"
	  Warning  FailedCreatePodSandBox  20m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e45f823e38e8e10ed14077a52e1750763b0c366d8a775bbf53f656c10861f185": failed to find network info for sandbox "e45f823e38e8e10ed14077a52e1750763b0c366d8a775bbf53f656c10861f185"
	  Warning  FailedCreatePodSandBox  19m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "92ae39dcfd89289f5a5fdc5ae0c23196a91b58214916aead7f95620a2697c009": failed to find network info for sandbox "92ae39dcfd89289f5a5fdc5ae0c23196a91b58214916aead7f95620a2697c009"
	  Warning  FailedCreatePodSandBox  19m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "98a14fefd702a6a8ff4d95d8bebac62053e439ee134d840d76e175ba4e8c45d6": failed to find network info for sandbox "98a14fefd702a6a8ff4d95d8bebac62053e439ee134d840d76e175ba4e8c45d6"
	  Warning  FailedCreatePodSandBox  11m (x39 over 19m)    kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "68cb483b1808e73ea325cca055c7a7f1bd2a591a81aa8b2bcb8cb96560fd08b2": failed to find network info for sandbox "68cb483b1808e73ea325cca055c7a7f1bd2a591a81aa8b2bcb8cb96560fd08b2"
	  Warning  FailedCreatePodSandBox  6m19s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7fe3917c0d6aab1a6ff26d5c4b079d63ad151baf82547e311c168f20a96d5a2f": failed to find network info for sandbox "7fe3917c0d6aab1a6ff26d5c4b079d63ad151baf82547e311c168f20a96d5a2f"
	  Warning  FailedCreatePodSandBox  6m6s                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "0db7eeb9931726fb0d7e4b34a94237970db00bbbe016b3d248d2134555a58a39": failed to find network info for sandbox "0db7eeb9931726fb0d7e4b34a94237970db00bbbe016b3d248d2134555a58a39"
	  Warning  FailedCreatePodSandBox  5m52s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "0bdf58dcce86cf27ee625b35890c2ed39e21e4085264a9534d650a88345d5e1b": failed to find network info for sandbox "0bdf58dcce86cf27ee625b35890c2ed39e21e4085264a9534d650a88345d5e1b"
	  Warning  FailedCreatePodSandBox  5m37s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "8ed07fecfa4d7391725cfd9c0c667faf15fbb963d537232427c91fbcd7b8292c": failed to find network info for sandbox "8ed07fecfa4d7391725cfd9c0c667faf15fbb963d537232427c91fbcd7b8292c"
	  Warning  FailedCreatePodSandBox  5m24s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "51ca16759937df03c72e57b982b338a9fedf39c3cf7f484f7914b36206b25ddc": failed to find network info for sandbox "51ca16759937df03c72e57b982b338a9fedf39c3cf7f484f7914b36206b25ddc"
	  Warning  FailedCreatePodSandBox  5m10s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "0fef9604ad3c9242a5ea37540f43f228043b623a3a2e743ca8854d410e84bf8e": failed to find network info for sandbox "0fef9604ad3c9242a5ea37540f43f228043b623a3a2e743ca8854d410e84bf8e"
	  Warning  FailedCreatePodSandBox  4m57s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "8b7f9292e61bae59e38bef2696828f34bbda6fdc194a8a4cb69956e235db03b0": failed to find network info for sandbox "8b7f9292e61bae59e38bef2696828f34bbda6fdc194a8a4cb69956e235db03b0"
	  Warning  FailedCreatePodSandBox  4m42s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "ca19f97a790579e7d03775852e705fefc762a7410df4a105a404c1b8b59faf2b": failed to find network info for sandbox "ca19f97a790579e7d03775852e705fefc762a7410df4a105a404c1b8b59faf2b"
	  Warning  FailedCreatePodSandBox  4m28s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f222f5660aa1603f75c68d4a1c083fd2dd0a78986aba704c963e4814c6b30770": failed to find network info for sandbox "f222f5660aa1603f75c68d4a1c083fd2dd0a78986aba704c963e4814c6b30770"
	  Warning  FailedCreatePodSandBox  47s (x17 over 4m15s)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "39bd42f515a4ccd9e29a6e0c0ea7811e7eb3fab0ef0644027b8aa8c821bc82a6": failed to find network info for sandbox "39bd42f515a4ccd9e29a6e0c0ea7811e7eb3fab0ef0644027b8aa8c821bc82a6"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (425.01s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 node delete m03 --alsologtostderr -v 5: (6.388199942s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5: exit status 7 (551.90207ms)

                                                
                                                
-- stdout --
	ha-326307
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326307-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:46:51.727656  114554 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:46:51.727754  114554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:46:51.727759  114554 out.go:374] Setting ErrFile to fd 2...
	I0919 22:46:51.727762  114554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:46:51.727980  114554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:46:51.728145  114554 out.go:368] Setting JSON to false
	I0919 22:46:51.728190  114554 mustload.go:65] Loading cluster: ha-326307
	I0919 22:46:51.728239  114554 notify.go:220] Checking for updates...
	I0919 22:46:51.728548  114554 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:46:51.728570  114554 status.go:174] checking status of ha-326307 ...
	I0919 22:46:51.728984  114554 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:46:51.751213  114554 status.go:371] ha-326307 host status = "Running" (err=<nil>)
	I0919 22:46:51.751257  114554 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:46:51.751594  114554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:46:51.771773  114554 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:46:51.772109  114554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:46:51.772174  114554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:46:51.793723  114554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:46:51.889924  114554 ssh_runner.go:195] Run: systemctl --version
	I0919 22:46:51.894738  114554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:46:51.907621  114554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:46:51.971133  114554 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-19 22:46:51.958329507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:46:51.971937  114554 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:46:51.971986  114554 api_server.go:166] Checking apiserver status ...
	I0919 22:46:51.972041  114554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:46:51.985666  114554 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1007/cgroup
	W0919 22:46:51.996556  114554 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1007/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:46:51.996616  114554 ssh_runner.go:195] Run: ls
	I0919 22:46:52.000386  114554 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:46:52.004664  114554 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:46:52.004688  114554 status.go:463] ha-326307 apiserver status = Running (err=<nil>)
	I0919 22:46:52.004697  114554 status.go:176] ha-326307 status: &{Name:ha-326307 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:46:52.004712  114554 status.go:174] checking status of ha-326307-m02 ...
	I0919 22:46:52.004952  114554 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:46:52.024087  114554 status.go:371] ha-326307-m02 host status = "Running" (err=<nil>)
	I0919 22:46:52.024113  114554 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:46:52.024404  114554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:46:52.045524  114554 host.go:66] Checking if "ha-326307-m02" exists ...
	I0919 22:46:52.045868  114554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:46:52.045925  114554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:46:52.065330  114554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:46:52.160523  114554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:46:52.174407  114554 kubeconfig.go:125] found "ha-326307" server: "https://192.168.49.254:8443"
	I0919 22:46:52.174442  114554 api_server.go:166] Checking apiserver status ...
	I0919 22:46:52.174486  114554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:46:52.187303  114554 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/653/cgroup
	W0919 22:46:52.198817  114554 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/653/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:46:52.198865  114554 ssh_runner.go:195] Run: ls
	I0919 22:46:52.202804  114554 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:46:52.207012  114554 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:46:52.207039  114554 status.go:463] ha-326307-m02 apiserver status = Running (err=<nil>)
	I0919 22:46:52.207047  114554 status.go:176] ha-326307-m02 status: &{Name:ha-326307-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:46:52.207067  114554 status.go:174] checking status of ha-326307-m04 ...
	I0919 22:46:52.207344  114554 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:46:52.226899  114554 status.go:371] ha-326307-m04 host status = "Stopped" (err=<nil>)
	I0919 22:46:52.226919  114554 status.go:384] host is not running, skipping remaining checks
	I0919 22:46:52.226925  114554 status.go:176] ha-326307-m04 status: &{Name:ha-326307-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-326307
helpers_test.go:243: (dbg) docker inspect ha-326307:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	        "Created": "2025-09-19T22:23:18.619000062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 103141,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:40:06.624789529Z",
	            "FinishedAt": "2025-09-19T22:40:05.96037119Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hosts",
	        "LogPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3-json.log",
	        "Name": "/ha-326307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-326307:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-326307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	                "LowerDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-326307",
	                "Source": "/var/lib/docker/volumes/ha-326307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-326307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-326307",
	                "name.minikube.sigs.k8s.io": "ha-326307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "06e56c61a506ab53aec79a320b27a6a2cf564500e22874ecad29c9521c3f21e9",
	            "SandboxKey": "/var/run/docker/netns/06e56c61a506",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32819"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32820"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32823"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32821"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32822"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-326307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:8a:0a:e2:38:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "465af21e2d8d112a34f14f7c3ab89eac4e6e57f582ff8e32b514381f55dd085e",
	                    "EndpointID": "bf734c63b8ebe83bbbed163afe56c19f4973081d194aed0cefd76108129a5748",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-326307",
	                        "5e0f1fe86b08"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-326307 -n ha-326307
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 logs -n 25: (1.700800871s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m02 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m02.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp testdata/cp-test.txt ha-326307-m04:/home/docker/cp-test.txt                                                            │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307-m04.txt │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307:/home/docker/cp-test_ha-326307-m04_ha-326307.txt                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307.txt                                                │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m02 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m03:/home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ node    │ ha-326307 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ node    │ ha-326307 node start m02 --alsologtostderr -v 5                                                                                     │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ node    │ ha-326307 node list --alsologtostderr -v 5                                                                                          │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:39 UTC │                     │
	│ stop    │ ha-326307 stop --alsologtostderr -v 5                                                                                               │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:39 UTC │ 19 Sep 25 22:40 UTC │
	│ start   │ ha-326307 start --wait true --alsologtostderr -v 5                                                                                  │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:40 UTC │                     │
	│ node    │ ha-326307 node list --alsologtostderr -v 5                                                                                          │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:46 UTC │                     │
	│ node    │ ha-326307 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:46 UTC │ 19 Sep 25 22:46 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:40:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:40:06.378966  102947 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:40:06.379330  102947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:40:06.379341  102947 out.go:374] Setting ErrFile to fd 2...
	I0919 22:40:06.379345  102947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:40:06.379571  102947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:40:06.380057  102947 out.go:368] Setting JSON to false
	I0919 22:40:06.381142  102947 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4950,"bootTime":1758316656,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:40:06.381289  102947 start.go:140] virtualization: kvm guest
	I0919 22:40:06.383708  102947 out.go:179] * [ha-326307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:40:06.385240  102947 notify.go:220] Checking for updates...
	I0919 22:40:06.385299  102947 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:40:06.386659  102947 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:40:06.388002  102947 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:40:06.389281  102947 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:40:06.390761  102947 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:40:06.392296  102947 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:40:06.394377  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:06.394567  102947 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:40:06.419564  102947 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:40:06.419671  102947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:40:06.482479  102947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:40:06.471430741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:40:06.482585  102947 docker.go:318] overlay module found
	I0919 22:40:06.484475  102947 out.go:179] * Using the docker driver based on existing profile
	I0919 22:40:06.485822  102947 start.go:304] selected driver: docker
	I0919 22:40:06.485843  102947 start.go:918] validating driver "docker" against &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:40:06.485989  102947 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:40:06.486131  102947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:40:06.542030  102947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:40:06.531788772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:40:06.542709  102947 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:40:06.542747  102947 cni.go:84] Creating CNI manager for ""
	I0919 22:40:06.542808  102947 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:40:06.542862  102947 start.go:348] cluster config:
	{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:40:06.544976  102947 out.go:179] * Starting "ha-326307" primary control-plane node in "ha-326307" cluster
	I0919 22:40:06.546636  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:06.548781  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:06.550349  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:06.550411  102947 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 22:40:06.550421  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:06.550484  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:06.550539  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:06.550548  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:06.550672  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:06.573025  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:06.573049  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:06.573066  102947 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:40:06.573093  102947 start.go:360] acquireMachinesLock for ha-326307: {Name:mk42b79b90944aab63c8b37c2f94e04ca1ebec1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:06.573185  102947 start.go:364] duration metric: took 59.872µs to acquireMachinesLock for "ha-326307"
	I0919 22:40:06.573210  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:06.573217  102947 fix.go:54] fixHost starting: 
	I0919 22:40:06.573525  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:06.592648  102947 fix.go:112] recreateIfNeeded on ha-326307: state=Stopped err=<nil>
	W0919 22:40:06.592678  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:06.594861  102947 out.go:252] * Restarting existing docker container for "ha-326307" ...
	I0919 22:40:06.594935  102947 cli_runner.go:164] Run: docker start ha-326307
	I0919 22:40:06.849585  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:06.870075  102947 kic.go:430] container "ha-326307" state is running.
	I0919 22:40:06.870543  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:40:06.891652  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:06.891897  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:06.891960  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:06.913541  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:06.913830  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32819 <nil> <nil>}
	I0919 22:40:06.913845  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:06.914579  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60650->127.0.0.1:32819: read: connection reset by peer
	I0919 22:40:10.057342  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:40:10.057370  102947 ubuntu.go:182] provisioning hostname "ha-326307"
	I0919 22:40:10.057448  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.076664  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:10.076914  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32819 <nil> <nil>}
	I0919 22:40:10.076932  102947 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307 && echo "ha-326307" | sudo tee /etc/hostname
	I0919 22:40:10.228297  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:40:10.228362  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.247319  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:10.247573  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32819 <nil> <nil>}
	I0919 22:40:10.247594  102947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:40:10.386261  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
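For reference, the SSH command shown just above pins the node's hostname in /etc/hosts idempotently: leave the file untouched if an entry for ha-326307 already exists, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new entry. A minimal Go sketch of the same edit (illustrative only, not minikube's actual code):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// pinHostname mirrors the shell logic above: no-op if the hostname is
// already present, rewrite an existing 127.0.1.1 entry, else append one.
func pinHostname(hosts, name string) string {
	present := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`)
	if present.MatchString(hosts) {
		return hosts
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(pinHostname("127.0.0.1 localhost\n127.0.1.1 old-name\n", "ha-326307"))
}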
	I0919 22:40:10.386297  102947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:40:10.386346  102947 ubuntu.go:190] setting up certificates
	I0919 22:40:10.386360  102947 provision.go:84] configureAuth start
	I0919 22:40:10.386416  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:40:10.407761  102947 provision.go:143] copyHostCerts
	I0919 22:40:10.407810  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:10.407855  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:40:10.407875  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:10.407957  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:40:10.408069  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:10.408095  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:40:10.408103  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:10.408148  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:40:10.408242  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:10.408268  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:40:10.408278  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:10.408327  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:40:10.408399  102947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307 san=[127.0.0.1 192.168.49.2 ha-326307 localhost minikube]
	I0919 22:40:10.713645  102947 provision.go:177] copyRemoteCerts
	I0919 22:40:10.713742  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:40:10.713785  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.733589  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:10.833003  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:40:10.833079  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:40:10.860656  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:40:10.860740  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:40:10.888926  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:40:10.889032  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:40:10.916393  102947 provision.go:87] duration metric: took 530.019982ms to configureAuth
	I0919 22:40:10.916415  102947 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:40:10.916623  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:10.916638  102947 machine.go:96] duration metric: took 4.024727048s to provisionDockerMachine
	I0919 22:40:10.916646  102947 start.go:293] postStartSetup for "ha-326307" (driver="docker")
	I0919 22:40:10.916656  102947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:40:10.916705  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:40:10.916774  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.935896  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.036597  102947 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:40:11.040388  102947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:40:11.040431  102947 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:40:11.040440  102947 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:40:11.040446  102947 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:40:11.040457  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:40:11.040518  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:40:11.040597  102947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:40:11.040608  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:40:11.040710  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:40:11.050512  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:11.077986  102947 start.go:296] duration metric: took 161.32783ms for postStartSetup
	I0919 22:40:11.078088  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:40:11.078139  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:11.099514  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.193605  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:40:11.198421  102947 fix.go:56] duration metric: took 4.625199971s for fixHost
	I0919 22:40:11.198447  102947 start.go:83] releasing machines lock for "ha-326307", held for 4.625246732s
	I0919 22:40:11.198524  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:40:11.217572  102947 ssh_runner.go:195] Run: cat /version.json
	I0919 22:40:11.217596  102947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:40:11.217615  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:11.217666  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:11.238048  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.238195  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.415017  102947 ssh_runner.go:195] Run: systemctl --version
	I0919 22:40:11.420537  102947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:40:11.425907  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:40:11.447016  102947 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:40:11.447107  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:40:11.457668  102947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:40:11.457703  102947 start.go:495] detecting cgroup driver to use...
	I0919 22:40:11.457740  102947 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:40:11.457803  102947 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:40:11.473712  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:40:11.486915  102947 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:40:11.486970  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:40:11.501818  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:40:11.514985  102947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:40:11.582004  102947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:40:11.651320  102947 docker.go:234] disabling docker service ...
	I0919 22:40:11.651379  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:40:11.665822  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:40:11.678416  102947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:40:11.746878  102947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:40:11.815384  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:40:11.828348  102947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:40:11.847640  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:40:11.859649  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:40:11.871696  102947 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:40:11.871768  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:40:11.883197  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:11.894832  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:40:11.906582  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:11.918458  102947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:40:11.929108  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:40:11.940521  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:40:11.952577  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:40:11.963963  102947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:40:11.974367  102947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:40:11.985259  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:12.050391  102947 ssh_runner.go:195] Run: sudo systemctl restart containerd
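The sed edits above rewrite /etc/containerd/config.toml so the runtime matches the detected "systemd" cgroup driver (SystemdCgroup = true, pause image, CNI conf_dir, unprivileged ports), and the daemon-reload plus restart applies them. A small Go sketch of the core rewrite, assuming a config.toml that already contains a SystemdCgroup key (illustrative, not the code minikube runs):

package main

import (
	"fmt"
	"regexp"
)

// forceSystemdCgroup flips any "SystemdCgroup = ..." line to true while
// preserving its indentation, like the sed command in the log above.
func forceSystemdCgroup(configTOML string) string {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = true")
}

func main() {
	in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
`
	fmt.Print(forceSystemdCgroup(in))
	// The real flow then runs `systemctl daemon-reload` and
	// `systemctl restart containerd`, as logged above.
}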
	I0919 22:40:12.169871  102947 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:40:12.169947  102947 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:40:12.174079  102947 start.go:563] Will wait 60s for crictl version
	I0919 22:40:12.174139  102947 ssh_runner.go:195] Run: which crictl
	I0919 22:40:12.177946  102947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:40:12.213111  102947 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:40:12.213183  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:12.237742  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:12.267221  102947 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:40:12.268667  102947 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:40:12.287123  102947 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:40:12.291375  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:12.304417  102947 kubeadm.go:875] updating cluster {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:40:12.304576  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:12.304623  102947 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:40:12.341103  102947 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:40:12.341184  102947 containerd.go:534] Images already preloaded, skipping extraction
	I0919 22:40:12.341271  102947 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:40:12.378884  102947 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:40:12.378907  102947 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:40:12.378916  102947 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0919 22:40:12.379030  102947 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
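The kubelet drop-in above clears ExecStart and re-sets it with node-specific flags (--hostname-override and --node-ip for this machine) before being written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A hedged Go sketch of rendering such a drop-in with text/template (field names here are illustrative):

package main

import (
	"os"
	"text/template"
)

// dropin is the per-node systemd override: the empty ExecStart= resets the
// unit's command, and the second ExecStart= supplies this node's flags.
const dropin = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("dropin").Parse(dropin))
	_ = t.Execute(os.Stdout, struct{ Version, Node, IP string }{"v1.34.0", "ha-326307", "192.168.49.2"})
}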
	I0919 22:40:12.379093  102947 ssh_runner.go:195] Run: sudo crictl info
	I0919 22:40:12.415076  102947 cni.go:84] Creating CNI manager for ""
	I0919 22:40:12.415100  102947 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:40:12.415111  102947 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:40:12.415129  102947 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326307 NodeName:ha-326307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:40:12.415290  102947 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-326307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
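The generated kubeadm config above is a single multi-document YAML carrying four kinds: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A stdlib-only Go sketch (not minikube code) that splits such a file and lists each document's kind:

package main

import (
	"fmt"
	"strings"
)

// kinds splits a multi-document YAML on "---" separators and returns the
// value of the first "kind:" line found in each document.
func kinds(multiDoc string) []string {
	var out []string
	for _, doc := range strings.Split(multiDoc, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				out = append(out, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
				break
			}
		}
	}
	return out
}

func main() {
	cfg := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
	fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration]
}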
	I0919 22:40:12.415312  102947 kube-vip.go:115] generating kube-vip config ...
	I0919 22:40:12.415360  102947 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:40:12.428658  102947 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:12.428770  102947 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
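As logged above, IPVS-based control-plane load-balancing is skipped because no ip_vs kernel modules are loaded, so the manifest falls back to an ARP-advertised VIP (192.168.49.254) managed by kube-vip running as a static pod. A minimal Go sketch of that probe, assuming a Linux host with lsmod on PATH (illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ipvsAvailable reports whether the ip_vs module family shows up in lsmod,
// mirroring the `lsmod | grep ip_vs` check in the log above.
func ipvsAvailable() bool {
	out, err := exec.Command("lsmod").Output()
	if err != nil {
		return false
	}
	return strings.Contains(string(out), "ip_vs")
}

func main() {
	if ipvsAvailable() {
		fmt.Println("ip_vs loaded: IPVS load-balancing could be enabled")
	} else {
		fmt.Println("ip_vs missing: keep the ARP-based VIP only")
	}
}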
	I0919 22:40:12.428823  102947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:40:12.438647  102947 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:40:12.438722  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:40:12.448707  102947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 22:40:12.468517  102947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:40:12.488929  102947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0919 22:40:12.510232  102947 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:40:12.530559  102947 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:40:12.534624  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:12.548237  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:12.611595  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:12.634054  102947 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.2
	I0919 22:40:12.634076  102947 certs.go:194] generating shared ca certs ...
	I0919 22:40:12.634091  102947 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:12.634256  102947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:40:12.634323  102947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:40:12.634335  102947 certs.go:256] generating profile certs ...
	I0919 22:40:12.634435  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:40:12.634462  102947 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704
	I0919 22:40:12.634473  102947 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:40:12.848520  102947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704 ...
	I0919 22:40:12.848550  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704: {Name:mkec91c90022534b703be5f6d2ae62638fdba9da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:12.848737  102947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704 ...
	I0919 22:40:12.848755  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704: {Name:mka1bfb464462bf578809e209441ee38ad333adf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:12.848871  102947 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:40:12.849067  102947 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
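The apiserver serving certificate generated above lists every control-plane address as a SAN (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2/.3/.4) plus the HA VIP 192.168.49.254, so a client reaching any endpoint sees a valid certificate. A hedged Go sketch of issuing a cert with IP SANs (self-signed here for brevity, unlike the CA-signed profile cert in the log):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Generate a throwaway key and a template whose IP SANs include the
	// per-node control-plane IPs and the shared VIP.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-326307", "localhost"},
		IPAddresses: []net.IP{
			net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.49.2"),
			net.ParseIP("192.168.49.3"),
			net.ParseIP("192.168.49.4"),
			net.ParseIP("192.168.49.254"), // the kube-vip HA VIP
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued %d-byte DER cert with %d IP SANs\n", len(der), len(tmpl.IPAddresses))
}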
	I0919 22:40:12.849277  102947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:40:12.849295  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:40:12.849315  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:40:12.849337  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:40:12.849355  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:40:12.849373  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:40:12.849392  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:40:12.849410  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:40:12.849430  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:40:12.849610  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:40:12.849684  102947 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:40:12.849700  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:40:12.849733  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:40:12.849775  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:40:12.849812  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:40:12.849872  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:12.849915  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:40:12.849936  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:40:12.849955  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:12.850570  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:40:12.881412  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:40:12.909365  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:40:12.936570  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:40:12.963699  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:40:12.991460  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:40:13.019268  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:40:13.046670  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:40:13.074069  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:40:13.101424  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:40:13.128690  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:40:13.156653  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:40:13.179067  102947 ssh_runner.go:195] Run: openssl version
	I0919 22:40:13.187620  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:40:13.203083  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:40:13.209838  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:40:13.209911  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:40:13.220919  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:40:13.238903  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:40:13.253729  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:40:13.261626  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:40:13.261780  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:40:13.272880  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:40:13.287661  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:40:13.303848  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:13.308762  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:13.308833  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:13.319788  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
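
The three test/ln -fs pairs above install each CA into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0), which is how OpenSSL-style trust stores look certificates up. A hedged sketch of one such installation step, shelling out to the same openssl invocation; it assumes openssl is on PATH and root privileges, and the helper name is made up.

    package certs // illustrative sketch

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert links a PEM file into /etc/ssl/certs under its OpenSSL
    // subject-hash name (<hash>.0), mirroring the "openssl x509 -hash -noout"
    // plus "ln -fs" sequence in the log. Assumes openssl on PATH and root.
    func installCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // -f semantics: replace an existing link if present
        return os.Symlink(pemPath, link)
    }
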
	I0919 22:40:13.336323  102947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:40:13.343266  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:40:13.355799  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:40:13.367939  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:40:13.378087  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:40:13.388839  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:40:13.399528  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
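
Each `openssl x509 ... -checkend 86400` call above asks whether the given control-plane certificate expires within the next 24 hours. The same check can be done in-process with crypto/x509, as in this illustrative sketch (the helper and its name are not part of minikube).

    package certs // illustrative sketch

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within d, the in-process equivalent of "openssl x509 -checkend 86400".
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }
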
	I0919 22:40:13.412341  102947 kubeadm.go:392] StartCluster: {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:40:13.412499  102947 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 22:40:13.412584  102947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:40:13.476121  102947 cri.go:89] found id: "83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad"
	I0919 22:40:13.476178  102947 cri.go:89] found id: "63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284"
	I0919 22:40:13.476184  102947 cri.go:89] found id: "7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c"
	I0919 22:40:13.476189  102947 cri.go:89] found id: "c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6"
	I0919 22:40:13.476197  102947 cri.go:89] found id: "e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5"
	I0919 22:40:13.476204  102947 cri.go:89] found id: "d1c181558b58c3ebc7dae5df97b80119a8c6d18dc19b0532b61999c4db0c6668"
	I0919 22:40:13.476209  102947 cri.go:89] found id: "ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93"
	I0919 22:40:13.476214  102947 cri.go:89] found id: "1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6"
	I0919 22:40:13.476221  102947 cri.go:89] found id: "f52d2d9f5881b5d50f95a3aeef2c876d51dd6b2be6c464a7464b26e8175b0fc6"
	I0919 22:40:13.476232  102947 cri.go:89] found id: "365cc00c2e009eeed7e71d1202a4d406c12f0d9faee38762ba691eb2d7c71f89"
	I0919 22:40:13.476255  102947 cri.go:89] found id: "bd9e41958ffbbb27dab3d180a56fb27df36a1a1896db3a6f322a8aabcda57677"
	I0919 22:40:13.476262  102947 cri.go:89] found id: "456a0c3cbf5ce028b9cbac658728c1fee13ad8e2659bfa0c625cd685d711c708"
	I0919 22:40:13.476267  102947 cri.go:89] found id: "05ab0247624a7f8ffa6bc948e3abc3adc49911c297291eb0a6dd42e3df39f4cd"
	I0919 22:40:13.476272  102947 cri.go:89] found id: "e5c59a6abe97751de42afd27010936d1c3a401fad6cd730e75a1692a895b4fbc"
	I0919 22:40:13.476278  102947 cri.go:89] found id: "e80d65e3c7c18da87f7fa003e39382f5a4285ba4782fc295197421c6b882a161"
	I0919 22:40:13.476285  102947 cri.go:89] found id: ""
	I0919 22:40:13.476358  102947 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0919 22:40:13.511540  102947 cri.go:116] JSON = [{"ociVersion":"1.2.0","id":"35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","pid":903,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92/rootfs","created":"2025-09-19T22:40:13.265497632Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-ha-326307_57c850ed4c5abebc96f109c9dc04f98c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-ha-3263
07","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"57c850ed4c5abebc96f109c9dc04f98c"},"owner":"root"},{"ociVersion":"1.2.0","id":"4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","pid":851,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f/rootfs","created":"2025-09-19T22:40:13.237289545Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-ha-326307_f6c96a149704fe94a8f3f9671ba1a8ff","io.kubernetes.cri.sandbox-memor
y":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f6c96a149704fe94a8f3f9671ba1a8ff"},"owner":"root"},{"ociVersion":"1.2.0","id":"63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284","pid":1109,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284/rootfs","created":"2025-09-19T22:40:13.452193435Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri.sandbox-id":"b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","io.kubernetes.cri.sandbox-name":"kube-scheduler-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-s
ystem","io.kubernetes.cri.sandbox-uid":"02be84f36b44ed11e0db130395870414"},"owner":"root"},{"ociVersion":"1.2.0","id":"7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c","pid":1081,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c/rootfs","created":"2025-09-19T22:40:13.445726517Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri.sandbox-id":"35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","io.kubernetes.cri.sandbox-name":"kube-controller-manager-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"57c850ed4c5abebc96f109c9dc04f98c"},"owner":"r
oot"},{"ociVersion":"1.2.0","id":"8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","pid":926,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d/rootfs","created":"2025-09-19T22:40:13.291697374Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-vip-ha-326307_11fc7e0ddcb5f54efe3aa73e9d205abc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-vip-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-ui
d":"11fc7e0ddcb5f54efe3aa73e9d205abc"},"owner":"root"},{"ociVersion":"1.2.0","id":"83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad","pid":1117,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad/rootfs","created":"2025-09-19T22:40:13.459929825Z","annotations":{"io.kubernetes.cri.container-name":"kube-vip","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri.sandbox-id":"8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","io.kubernetes.cri.sandbox-name":"kube-vip-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"11fc7e0ddcb5f54efe3aa73e9d205abc"},"owner":"root"},{"ociVersion":"1.2.0","id":"a85600718119d4e698fdb29d033016634be8e463e4c29a4
b1f9b6778b83c3910","pid":850,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910/rootfs","created":"2025-09-19T22:40:13.246511214Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-ha-326307_044bbdcbe96821df073716c7f05fb17d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"044bbdcbe96821df073716c7f05fb17d"},"owner":"root"},{"ociVersion":"1.2.0","id":"b84e
223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","pid":911,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248/rootfs","created":"2025-09-19T22:40:13.280883406Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-ha-326307_02be84f36b44ed11e0db130395870414","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"02be84f36b44ed11e0db
130395870414"},"owner":"root"},{"ociVersion":"1.2.0","id":"c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6","pid":1090,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6/rootfs","created":"2025-09-19T22:40:13.443035858Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910","io.kubernetes.cri.sandbox-name":"etcd-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"044bbdcbe96821df073716c7f05fb17d"},"owner":"root"},{"ociVersion":"1.2.0","id":"e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5","pid":1007,"statu
s":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5/rootfs","created":"2025-09-19T22:40:13.41525993Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri.sandbox-id":"4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","io.kubernetes.cri.sandbox-name":"kube-apiserver-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f6c96a149704fe94a8f3f9671ba1a8ff"},"owner":"root"}]
	I0919 22:40:13.511763  102947 cri.go:126] list returned 10 containers
	I0919 22:40:13.511789  102947 cri.go:129] container: {ID:35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92 Status:running}
	I0919 22:40:13.511829  102947 cri.go:131] skipping 35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92 - not in ps
	I0919 22:40:13.511840  102947 cri.go:129] container: {ID:4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f Status:running}
	I0919 22:40:13.511848  102947 cri.go:131] skipping 4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f - not in ps
	I0919 22:40:13.511854  102947 cri.go:129] container: {ID:63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284 Status:running}
	I0919 22:40:13.511864  102947 cri.go:135] skipping {63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284 running}: state = "running", want "paused"
	I0919 22:40:13.511877  102947 cri.go:129] container: {ID:7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c Status:running}
	I0919 22:40:13.511890  102947 cri.go:135] skipping {7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c running}: state = "running", want "paused"
	I0919 22:40:13.511898  102947 cri.go:129] container: {ID:8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d Status:running}
	I0919 22:40:13.511910  102947 cri.go:131] skipping 8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d - not in ps
	I0919 22:40:13.511916  102947 cri.go:129] container: {ID:83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad Status:running}
	I0919 22:40:13.511925  102947 cri.go:135] skipping {83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad running}: state = "running", want "paused"
	I0919 22:40:13.511935  102947 cri.go:129] container: {ID:a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910 Status:running}
	I0919 22:40:13.511941  102947 cri.go:131] skipping a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910 - not in ps
	I0919 22:40:13.511946  102947 cri.go:129] container: {ID:b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248 Status:running}
	I0919 22:40:13.511951  102947 cri.go:131] skipping b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248 - not in ps
	I0919 22:40:13.511957  102947 cri.go:129] container: {ID:c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6 Status:running}
	I0919 22:40:13.511969  102947 cri.go:135] skipping {c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6 running}: state = "running", want "paused"
	I0919 22:40:13.511976  102947 cri.go:129] container: {ID:e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5 Status:running}
	I0919 22:40:13.511988  102947 cri.go:135] skipping {e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5 running}: state = "running", want "paused"
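
The block above cross-references the IDs reported by crictl with the JSON from `runc list`, keeping only containers whose state matches the requested one; since the request here is for paused containers and everything is running, every entry is skipped. A rough sketch of that filtering step, with a struct trimmed to the two JSON fields it needs.

    package cri // illustrative sketch

    import "encoding/json"

    // runcContainer keeps only the fields of `runc list -f json` that the
    // filtering step needs.
    type runcContainer struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    // filterByState walks the runc listing and keeps IDs that crictl also
    // reported and that are in the wanted state; everything else is skipped,
    // matching the "not in ps" and `state = "running", want "paused"` messages.
    func filterByState(runcJSON []byte, crictlIDs []string, want string) ([]string, error) {
        var listed []runcContainer
        if err := json.Unmarshal(runcJSON, &listed); err != nil {
            return nil, err
        }
        known := make(map[string]bool, len(crictlIDs))
        for _, id := range crictlIDs {
            known[id] = true
        }
        var keep []string
        for _, c := range listed {
            if !known[c.ID] {
                continue // "not in ps": sandboxes runc knows about but crictl did not list
            }
            if c.Status != want {
                continue // wrong state, e.g. running when paused was requested
            }
            keep = append(keep, c.ID)
        }
        return keep, nil
    }
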
	I0919 22:40:13.512041  102947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:40:13.524546  102947 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:40:13.524567  102947 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:40:13.524627  102947 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:40:13.537544  102947 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:13.538084  102947 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-326307" does not appear in /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:40:13.538273  102947 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14678/kubeconfig needs updating (will repair): [kubeconfig missing "ha-326307" cluster setting kubeconfig missing "ha-326307" context setting]
	I0919 22:40:13.538666  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
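
The "needs updating (will repair)" message means the ha-326307 cluster and context entries are missing from the kubeconfig and are re-added before the file is written back under the lock shown above. A minimal sketch of such a repair with client-go's clientcmd package; certificate fields and the user entry are omitted for brevity, and the helper itself is illustrative.

    package kubeconfig // illustrative sketch

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // ensureContext re-adds a missing cluster/context pair to a kubeconfig and
    // writes the file back; certificate data and the user entry are omitted here.
    func ensureContext(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[name]; !ok {
            cfg.Clusters[name] = &api.Cluster{Server: server} // e.g. https://192.168.49.2:8443
        }
        if _, ok := cfg.Contexts[name]; !ok {
            cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        }
        return clientcmd.WriteToFile(*cfg, path)
    }
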
	I0919 22:40:13.539452  102947 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:40:13.540084  102947 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:40:13.540104  102947 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:40:13.540111  102947 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:40:13.540118  102947 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:40:13.540125  102947 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:40:13.540609  102947 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:40:13.540743  102947 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:40:13.555466  102947 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:40:13.555575  102947 kubeadm.go:593] duration metric: took 31.000137ms to restartPrimaryControlPlane
	I0919 22:40:13.555603  102947 kubeadm.go:394] duration metric: took 143.274252ms to StartCluster
	I0919 22:40:13.555651  102947 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:13.555800  102947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:40:13.556731  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:13.557204  102947 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:40:13.557402  102947 start.go:241] waiting for startup goroutines ...
	I0919 22:40:13.557267  102947 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:40:13.557510  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:13.561726  102947 out.go:179] * Enabled addons: 
	I0919 22:40:13.563479  102947 addons.go:514] duration metric: took 6.21303ms for enable addons: enabled=[]
	I0919 22:40:13.563535  102947 start.go:246] waiting for cluster config update ...
	I0919 22:40:13.563548  102947 start.go:255] writing updated cluster config ...
	I0919 22:40:13.565943  102947 out.go:203] 
	I0919 22:40:13.568105  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:13.568246  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:13.570538  102947 out.go:179] * Starting "ha-326307-m02" control-plane node in "ha-326307" cluster
	I0919 22:40:13.572566  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:13.574955  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:13.576797  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:13.576835  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:13.576935  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:13.576982  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:13.576999  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:13.577147  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:13.603282  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:13.603304  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:13.603323  102947 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:40:13.603356  102947 start.go:360] acquireMachinesLock for ha-326307-m02: {Name:mk4919a9b19250804b0f53d01bcd11efaf9a431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:13.603419  102947 start.go:364] duration metric: took 47.152µs to acquireMachinesLock for "ha-326307-m02"
	I0919 22:40:13.603445  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:13.603459  102947 fix.go:54] fixHost starting: m02
	I0919 22:40:13.603697  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:40:13.626324  102947 fix.go:112] recreateIfNeeded on ha-326307-m02: state=Stopped err=<nil>
	W0919 22:40:13.626352  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:13.629640  102947 out.go:252] * Restarting existing docker container for "ha-326307-m02" ...
	I0919 22:40:13.629728  102947 cli_runner.go:164] Run: docker start ha-326307-m02
	I0919 22:40:13.926841  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:40:13.950131  102947 kic.go:430] container "ha-326307-m02" state is running.
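
After `docker start`, the container is inspected again until its state reads "running". A small sketch of that check using the same `--format={{.State.Status}}` template seen in the log; the polling interval and timeout are invented for illustration.

    package kic // illustrative sketch

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForRunning polls `docker container inspect --format={{.State.Status}}`
    // until the container reports "running" or the timeout elapses.
    func waitForRunning(name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("docker", "container", "inspect", name,
                "--format", "{{.State.Status}}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "running" {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("container %s did not reach the running state within %s", name, timeout)
    }
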
	I0919 22:40:13.950515  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:40:13.973194  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:13.973503  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:13.973577  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:13.996029  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:13.996469  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32824 <nil> <nil>}
	I0919 22:40:13.996495  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:13.997409  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55282->127.0.0.1:32824: read: connection reset by peer
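
The first SSH handshake fails with "connection reset by peer" because sshd inside the freshly restarted container is not listening yet; the provisioner simply retries, and the next attempt (about three seconds later in this run) succeeds. A sketch of such a retry loop with golang.org/x/crypto/ssh; address, credentials and timing are placeholders.

    package provision // illustrative sketch

    import (
        "fmt"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps attempting the SSH handshake until it succeeds or the
    // deadline passes; early attempts commonly fail while sshd is still starting.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
        deadline := time.Now().Add(timeout)
        for {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return client, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("ssh to %s never came up: %w", addr, err)
            }
            time.Sleep(time.Second)
        }
    }
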
	I0919 22:40:17.135269  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:40:17.135298  102947 ubuntu.go:182] provisioning hostname "ha-326307-m02"
	I0919 22:40:17.135359  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.155772  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:17.156086  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32824 <nil> <nil>}
	I0919 22:40:17.156103  102947 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m02 && echo "ha-326307-m02" | sudo tee /etc/hostname
	I0919 22:40:17.308282  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:40:17.308354  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.329394  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:17.329602  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32824 <nil> <nil>}
	I0919 22:40:17.329620  102947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:40:17.469105  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:40:17.469136  102947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:40:17.469173  102947 ubuntu.go:190] setting up certificates
	I0919 22:40:17.469188  102947 provision.go:84] configureAuth start
	I0919 22:40:17.469243  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:40:17.489456  102947 provision.go:143] copyHostCerts
	I0919 22:40:17.489512  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:17.489551  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:40:17.489560  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:17.489629  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:40:17.489711  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:17.489728  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:40:17.489735  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:17.489771  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:40:17.489846  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:17.489864  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:40:17.489870  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:17.489896  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:40:17.489952  102947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m02 san=[127.0.0.1 192.168.49.3 ha-326307-m02 localhost minikube]
	I0919 22:40:17.687121  102947 provision.go:177] copyRemoteCerts
	I0919 22:40:17.687196  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:40:17.687230  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.706618  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:17.805482  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:40:17.805552  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:40:17.834469  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:40:17.834533  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:40:17.862491  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:40:17.862578  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:40:17.891048  102947 provision.go:87] duration metric: took 421.847088ms to configureAuth
	I0919 22:40:17.891077  102947 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:40:17.891323  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:17.891337  102947 machine.go:96] duration metric: took 3.917817402s to provisionDockerMachine
	I0919 22:40:17.891348  102947 start.go:293] postStartSetup for "ha-326307-m02" (driver="docker")
	I0919 22:40:17.891362  102947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:40:17.891426  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:40:17.891475  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.911877  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.017574  102947 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:40:18.021564  102947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:40:18.021608  102947 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:40:18.021620  102947 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:40:18.021627  102947 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:40:18.021641  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:40:18.021732  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:40:18.021827  102947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:40:18.021845  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:40:18.021965  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:40:18.037625  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:18.072355  102947 start.go:296] duration metric: took 180.992211ms for postStartSetup
	I0919 22:40:18.072434  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:40:18.072488  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:18.097080  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.200976  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:40:18.207724  102947 fix.go:56] duration metric: took 4.604261714s for fixHost
	I0919 22:40:18.207752  102947 start.go:83] releasing machines lock for "ha-326307-m02", held for 4.604318809s
	I0919 22:40:18.207819  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:40:18.233683  102947 out.go:179] * Found network options:
	I0919 22:40:18.235326  102947 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:40:18.236979  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:18.237024  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:40:18.237101  102947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:40:18.237148  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:18.237186  102947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:40:18.237248  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:18.262883  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.265825  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.472261  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:40:18.501316  102947 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:40:18.501403  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:40:18.517881  102947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
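
The two find commands above first patch any loopback CNI config (adding a name field and pinning cniVersion to 1.0.0) and then move bridge/podman configs aside by appending ".mk_disabled"; on this node nothing matched, hence "nothing to disable". An illustrative sketch of the disable step; the directory and suffix follow the log, the Go helper is an assumption.

    package cni // illustrative sketch

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNIs renames bridge/podman CNI configs so the runtime ignores
    // them, following the "*.mk_disabled" convention from the log.
    func disableBridgeCNIs(dir string) ([]string, error) {
        var moved []string
        for _, pattern := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pattern))
            if err != nil {
                return moved, err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return moved, err
                }
                moved = append(moved, m)
            }
        }
        return moved, nil
    }
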
	I0919 22:40:18.517907  102947 start.go:495] detecting cgroup driver to use...
	I0919 22:40:18.517943  102947 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:40:18.518009  102947 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:40:18.540215  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:40:18.558468  102947 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:40:18.558538  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:40:18.578938  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:40:18.606098  102947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:40:18.738984  102947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:40:18.861135  102947 docker.go:234] disabling docker service ...
	I0919 22:40:18.861295  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:40:18.889797  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:40:18.903559  102947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:40:19.020834  102947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:40:19.210102  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:40:19.253298  102947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:40:19.294451  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:40:19.314809  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:40:19.329896  102947 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:40:19.329968  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:40:19.344499  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:19.359934  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:40:19.375426  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:19.390525  102947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:40:19.405742  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:40:19.419676  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:40:19.433744  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:40:19.447497  102947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:40:19.459701  102947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:40:19.472280  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:19.590393  102947 ssh_runner.go:195] Run: sudo systemctl restart containerd
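
The sed edits above adapt /etc/containerd/config.toml to the detected systemd cgroup driver (SystemdCgroup = true), pin the pause image, normalize the runc runtime type and the CNI conf dir, and then containerd is restarted. A sketch of the central SystemdCgroup toggle done in-process rather than via sed; the path and pattern mirror the log, the helper itself is illustrative.

    package containerd // illustrative sketch

    import (
        "os"
        "regexp"
    )

    // enableSystemdCgroup flips SystemdCgroup to true in containerd's config,
    // equivalent to the sed expression in the log.
    func enableSystemdCgroup(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
        return os.WriteFile(path, patched, 0o644)
    }
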
	I0919 22:40:19.844194  102947 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:40:19.844268  102947 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:40:19.848691  102947 start.go:563] Will wait 60s for crictl version
	I0919 22:40:19.848750  102947 ssh_runner.go:195] Run: which crictl
	I0919 22:40:19.852912  102947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:40:19.896612  102947 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
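
After the restart, the tool waits up to 60s for /run/containerd/containerd.sock to exist and then for `crictl version` to respond, as logged above. A sketch of the socket wait; the 60-second budget comes from the log, the polling loop is an assumption.

    package containerd // illustrative sketch

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket blocks until the containerd socket exists or the timeout hits.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("socket %s did not appear within %s", path, timeout)
            }
            time.Sleep(250 * time.Millisecond)
        }
    }
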
	I0919 22:40:19.896665  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:19.922108  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:19.951040  102947 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:40:19.952600  102947 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:40:19.954094  102947 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:40:19.972221  102947 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:40:19.976367  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:19.988586  102947 mustload.go:65] Loading cluster: ha-326307
	I0919 22:40:19.988826  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:19.989048  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:20.009691  102947 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:40:20.009938  102947 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.3
	I0919 22:40:20.009958  102947 certs.go:194] generating shared ca certs ...
	I0919 22:40:20.009977  102947 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:20.010097  102947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:40:20.010186  102947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:40:20.010200  102947 certs.go:256] generating profile certs ...
	I0919 22:40:20.010274  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:40:20.010317  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.d9fee4c2
	I0919 22:40:20.010351  102947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:40:20.010361  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:40:20.010388  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:40:20.010403  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:40:20.010415  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:40:20.010427  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:40:20.010440  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:40:20.010451  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:40:20.010463  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:40:20.010507  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:40:20.010541  102947 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:40:20.010552  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:40:20.010572  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:40:20.010593  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:40:20.010613  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:40:20.010656  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:20.010681  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:40:20.010696  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:20.010706  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:40:20.010750  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:20.034999  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:20.130696  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:40:20.137701  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:40:20.181406  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:40:20.188123  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:40:20.209898  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:40:20.217560  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:40:20.265391  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:40:20.271849  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:40:20.306378  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:40:20.313419  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:40:20.338279  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:40:20.344910  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:40:20.368606  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:40:20.417189  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:40:20.473868  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:40:20.554542  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:40:20.629092  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:40:20.678888  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:40:20.722550  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:40:20.778639  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:40:20.828112  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:40:20.884904  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:40:20.936206  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:40:20.979746  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:40:21.011968  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:40:21.037922  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:40:21.058425  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:40:21.078533  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:40:21.099029  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:40:21.125522  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:40:21.151265  102947 ssh_runner.go:195] Run: openssl version
	I0919 22:40:21.157938  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:40:21.169944  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:40:21.174243  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:40:21.174339  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:40:21.182194  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:40:21.195623  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:40:21.210343  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:21.216012  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:21.216080  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:21.226359  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:40:21.239970  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:40:21.256305  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:40:21.263490  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:40:21.263550  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:40:21.274306  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:40:21.289549  102947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:40:21.294844  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:40:21.305190  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:40:21.317466  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:40:21.327473  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:40:21.337404  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:40:21.346840  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
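Each `openssl x509 -checkend 86400` run above asks whether the named certificate will still be valid 24 hours from now; a non-zero exit status is what would trigger regeneration. A rough equivalent using Go's crypto/x509 (the file path below is one of the paths from the log and the 24h window matches the 86400-second check; the helper itself is only a sketch):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within the given window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}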
	I0919 22:40:21.355241  102947 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0919 22:40:21.355365  102947 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:40:21.355400  102947 kube-vip.go:115] generating kube-vip config ...
	I0919 22:40:21.355447  102947 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:40:21.372568  102947 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:21.372652  102947 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
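The static pod manifest above is what gets written to /etc/kubernetes/manifests/kube-vip.yaml so that every control-plane node can advertise the shared VIP 192.168.49.254 on port 8443, using ARP-based leader election since the ip_vs kernel modules were reported unavailable. A simplified sketch of rendering such a manifest from a template with text/template; the template text and parameter names here are illustrative assumptions, not minikube's actual generator:

package main

import (
	"os"
	"text/template"
)

// vipParams carries the per-cluster values that vary in the manifest.
type vipParams struct {
	VIP   string // shared control-plane address, e.g. 192.168.49.254
	Port  string // API server port, e.g. "8443"
	Image string // kube-vip image reference
}

const vipTemplate = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: address
      value: "{{.VIP}}"
    - name: port
      value: "{{.Port}}"
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipTemplate))
	_ = t.Execute(os.Stdout, vipParams{
		VIP:   "192.168.49.254",
		Port:  "8443",
		Image: "ghcr.io/kube-vip/kube-vip:v1.0.0",
	})
}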
	I0919 22:40:21.372715  102947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:40:21.385812  102947 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:40:21.385902  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:40:21.396920  102947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:40:21.418422  102947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:40:21.441221  102947 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:40:21.461293  102947 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:40:21.465499  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:21.479394  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:21.609276  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:21.625324  102947 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:40:21.625678  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:21.627937  102947 out.go:179] * Verifying Kubernetes components...
	I0919 22:40:21.629432  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:21.754519  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:21.770966  102947 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:40:21.771034  102947 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:40:21.771308  102947 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m02" to be "Ready" ...
	I0919 22:40:21.780317  102947 node_ready.go:49] node "ha-326307-m02" is "Ready"
	I0919 22:40:21.780344  102947 node_ready.go:38] duration metric: took 9.008043ms for node "ha-326307-m02" to be "Ready" ...
	I0919 22:40:21.780357  102947 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:40:21.780412  102947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:40:21.794097  102947 api_server.go:72] duration metric: took 168.727042ms to wait for apiserver process to appear ...
	I0919 22:40:21.794124  102947 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:40:21.794147  102947 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:40:21.800333  102947 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:40:21.801474  102947 api_server.go:141] control plane version: v1.34.0
	I0919 22:40:21.801509  102947 api_server.go:131] duration metric: took 7.377354ms to wait for apiserver health ...
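The readiness gate above polls the API server's /healthz endpoint until it answers 200 with "ok", then reads the control-plane version. A minimal probe along the same lines; it skips TLS verification purely for illustration, whereas the real client uses the cluster CA and client certificate from the rest.Config shown earlier in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: certificate trust is not verified here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
}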
	I0919 22:40:21.801520  102947 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:40:21.810182  102947 system_pods.go:59] 24 kube-system pods found
	I0919 22:40:21.810226  102947 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.810244  102947 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.810254  102947 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.810262  102947 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.810268  102947 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running
	I0919 22:40:21.810276  102947 system_pods.go:61] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:21.810281  102947 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:21.810292  102947 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:21.810300  102947 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.810311  102947 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.810315  102947 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running
	I0919 22:40:21.810325  102947 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.810332  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.810336  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running
	I0919 22:40:21.810340  102947 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:21.810344  102947 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:21.810348  102947 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:21.810353  102947 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.810361  102947 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.810365  102947 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running
	I0919 22:40:21.810369  102947 system_pods.go:61] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:21.810372  102947 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:21.810375  102947 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:21.810378  102947 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:21.810383  102947 system_pods.go:74] duration metric: took 8.856915ms to wait for pod list to return data ...
	I0919 22:40:21.810390  102947 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:40:21.813818  102947 default_sa.go:45] found service account: "default"
	I0919 22:40:21.813853  102947 default_sa.go:55] duration metric: took 3.458375ms for default service account to be created ...
	I0919 22:40:21.813864  102947 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:40:21.820987  102947 system_pods.go:86] 24 kube-system pods found
	I0919 22:40:21.821019  102947 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.821027  102947 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.821034  102947 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.821040  102947 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.821044  102947 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running
	I0919 22:40:21.821048  102947 system_pods.go:89] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:21.821051  102947 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:21.821054  102947 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:21.821059  102947 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.821064  102947 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.821068  102947 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running
	I0919 22:40:21.821074  102947 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.821079  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.821083  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running
	I0919 22:40:21.821087  102947 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:21.821090  102947 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:21.821095  102947 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:21.821100  102947 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.821107  102947 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.821114  102947 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running
	I0919 22:40:21.821118  102947 system_pods.go:89] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:21.821121  102947 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:21.821124  102947 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:21.821127  102947 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:21.821133  102947 system_pods.go:126] duration metric: took 7.263023ms to wait for k8s-apps to be running ...
	I0919 22:40:21.821142  102947 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:40:21.821209  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:40:21.835069  102947 system_svc.go:56] duration metric: took 13.918083ms WaitForService to wait for kubelet
	I0919 22:40:21.835096  102947 kubeadm.go:578] duration metric: took 209.729975ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:40:21.835114  102947 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:40:21.839112  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:21.839140  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:21.839183  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:21.839191  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:21.839198  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:21.839203  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:21.839208  102947 node_conditions.go:105] duration metric: took 4.090003ms to run NodePressure ...
	I0919 22:40:21.839223  102947 start.go:241] waiting for startup goroutines ...
	I0919 22:40:21.839260  102947 start.go:255] writing updated cluster config ...
	I0919 22:40:21.841908  102947 out.go:203] 
	I0919 22:40:21.843889  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:21.844011  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:21.846125  102947 out.go:179] * Starting "ha-326307-m03" control-plane node in "ha-326307" cluster
	I0919 22:40:21.848304  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:21.850127  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:21.851602  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:21.851635  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:21.851746  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:21.851778  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:21.851789  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:21.851912  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:21.876321  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:21.876341  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:21.876357  102947 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:40:21.876378  102947 start.go:360] acquireMachinesLock for ha-326307-m03: {Name:mk07818636650a6efffb19d787e84a34d6f1dd98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:21.876432  102947 start.go:364] duration metric: took 39.311µs to acquireMachinesLock for "ha-326307-m03"
	I0919 22:40:21.876450  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:21.876473  102947 fix.go:54] fixHost starting: m03
	I0919 22:40:21.876688  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:40:21.896238  102947 fix.go:112] recreateIfNeeded on ha-326307-m03: state=Stopped err=<nil>
	W0919 22:40:21.896264  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:21.898402  102947 out.go:252] * Restarting existing docker container for "ha-326307-m03" ...
	I0919 22:40:21.898493  102947 cli_runner.go:164] Run: docker start ha-326307-m03
	I0919 22:40:22.169027  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:40:22.190097  102947 kic.go:430] container "ha-326307-m03" state is running.
	I0919 22:40:22.190500  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:40:22.212272  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:22.212572  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:22.212637  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:22.233877  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:22.234093  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I0919 22:40:22.234104  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:22.234859  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37302->127.0.0.1:32829: read: connection reset by peer
	I0919 22:40:25.378797  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:40:25.378831  102947 ubuntu.go:182] provisioning hostname "ha-326307-m03"
	I0919 22:40:25.378898  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:25.414501  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:25.414938  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I0919 22:40:25.415073  102947 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m03 && echo "ha-326307-m03" | sudo tee /etc/hostname
	I0919 22:40:25.588850  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:40:25.588948  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:25.610247  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:25.610522  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I0919 22:40:25.610550  102947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:40:25.754732  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:40:25.754765  102947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:40:25.754794  102947 ubuntu.go:190] setting up certificates
	I0919 22:40:25.754806  102947 provision.go:84] configureAuth start
	I0919 22:40:25.754866  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:40:25.775758  102947 provision.go:143] copyHostCerts
	I0919 22:40:25.775814  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:25.775859  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:40:25.775876  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:25.775969  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:40:25.776130  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:25.776178  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:40:25.776185  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:25.776236  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:40:25.776312  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:25.776338  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:40:25.776347  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:25.776387  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:40:25.776465  102947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m03 san=[127.0.0.1 192.168.49.4 ha-326307-m03 localhost minikube]
	I0919 22:40:25.957556  102947 provision.go:177] copyRemoteCerts
	I0919 22:40:25.957614  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:40:25.957661  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:25.977125  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.075851  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:40:26.075925  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:40:26.103453  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:40:26.103525  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:40:26.130922  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:40:26.130993  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:40:26.158446  102947 provision.go:87] duration metric: took 403.627341ms to configureAuth
	I0919 22:40:26.158474  102947 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:40:26.158684  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:26.158696  102947 machine.go:96] duration metric: took 3.94610996s to provisionDockerMachine
	I0919 22:40:26.158706  102947 start.go:293] postStartSetup for "ha-326307-m03" (driver="docker")
	I0919 22:40:26.158718  102947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:40:26.158769  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:40:26.158815  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.177219  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.277051  102947 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:40:26.280902  102947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:40:26.280935  102947 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:40:26.280943  102947 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:40:26.280949  102947 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:40:26.280960  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:40:26.281017  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:40:26.281085  102947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:40:26.281094  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:40:26.281219  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:40:26.291493  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:26.319669  102947 start.go:296] duration metric: took 160.947592ms for postStartSetup
	I0919 22:40:26.319764  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:40:26.319819  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.340008  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.438911  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:40:26.444573  102947 fix.go:56] duration metric: took 4.568092826s for fixHost
	I0919 22:40:26.444606  102947 start.go:83] releasing machines lock for "ha-326307-m03", held for 4.568161658s
	I0919 22:40:26.444685  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:40:26.470387  102947 out.go:179] * Found network options:
	I0919 22:40:26.472070  102947 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:40:26.473856  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:26.473888  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:26.473917  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:26.473931  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:40:26.474012  102947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:40:26.474058  102947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:40:26.474062  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.474114  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.500808  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.503237  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.708883  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:40:26.738637  102947 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:40:26.738718  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:40:26.752845  102947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:40:26.752872  102947 start.go:495] detecting cgroup driver to use...
	I0919 22:40:26.752907  102947 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:40:26.752955  102947 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:40:26.771737  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:40:26.788372  102947 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:40:26.788434  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:40:26.810086  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:40:26.828338  102947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:40:26.983767  102947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:40:27.150072  102947 docker.go:234] disabling docker service ...
	I0919 22:40:27.150147  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:40:27.173008  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:40:27.193344  102947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:40:27.317738  102947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:40:27.460983  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:40:27.485592  102947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:40:27.507890  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:40:27.520044  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:40:27.534512  102947 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:40:27.534574  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:40:27.548984  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:27.562483  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:40:27.577519  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:27.592117  102947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:40:27.604075  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:40:27.616958  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:40:27.631964  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:40:27.646292  102947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:40:27.658210  102947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:40:27.672336  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:27.803893  102947 ssh_runner.go:195] Run: sudo systemctl restart containerd
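The sed commands above patch /etc/containerd/config.toml in place (SystemdCgroup = true, sandbox image registry.k8s.io/pause:3.10.1, runc v2 runtime, CNI conf_dir) before containerd is restarted. A rough Go sketch of one of those edits, same spirit as the sed expression in the log; the path and setting come from the log, but this helper is an assumption, not minikube's code:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup flips every `SystemdCgroup = ...` assignment in a
// containerd config.toml to true, preserving the original indentation.
func setSystemdCgroup(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	return os.WriteFile(path, patched, 0644)
}

func main() {
	if err := setSystemdCgroup("/etc/containerd/config.toml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}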
	I0919 22:40:28.062245  102947 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:40:28.062313  102947 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:40:28.066699  102947 start.go:563] Will wait 60s for crictl version
	I0919 22:40:28.066771  102947 ssh_runner.go:195] Run: which crictl
	I0919 22:40:28.071489  102947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:40:28.109371  102947 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:40:28.109444  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:28.135369  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:28.166192  102947 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:40:28.167830  102947 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:40:28.169229  102947 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:40:28.170416  102947 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:40:28.189509  102947 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:40:28.193804  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:28.206515  102947 mustload.go:65] Loading cluster: ha-326307
	I0919 22:40:28.206800  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:28.207069  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:28.226787  102947 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:40:28.227094  102947 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.4
	I0919 22:40:28.227201  102947 certs.go:194] generating shared ca certs ...
	I0919 22:40:28.227247  102947 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:28.227424  102947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:40:28.227487  102947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:40:28.227504  102947 certs.go:256] generating profile certs ...
	I0919 22:40:28.227586  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:40:28.227634  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604
	I0919 22:40:28.227713  102947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:40:28.227730  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:40:28.227749  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:40:28.227764  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:40:28.227783  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:40:28.227800  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:40:28.227819  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:40:28.227839  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:40:28.227862  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:40:28.227929  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:40:28.227971  102947 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:40:28.227984  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:40:28.228019  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:40:28.228051  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:40:28.228082  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:40:28.228166  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:28.228213  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:40:28.228239  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:40:28.228259  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.228383  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:28.247785  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:28.336571  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:40:28.341071  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:40:28.354226  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:40:28.358563  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:40:28.373723  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:40:28.378406  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:40:28.394406  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:40:28.399415  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:40:28.416091  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:40:28.420161  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:40:28.435710  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:40:28.439831  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:40:28.454973  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:40:28.488291  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:40:28.520386  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:40:28.548878  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:40:28.577674  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:40:28.606894  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:40:28.635467  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:40:28.664035  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:40:28.692528  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:40:28.721969  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:40:28.750129  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:40:28.777226  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:40:28.798416  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:40:28.818429  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:40:28.844040  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:40:28.875418  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:40:28.898298  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:40:28.918961  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:40:28.940259  102947 ssh_runner.go:195] Run: openssl version
	I0919 22:40:28.946752  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:40:28.959425  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.964456  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.964528  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.973714  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:40:28.984876  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:40:28.996258  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:40:29.000541  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:40:29.000605  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:40:29.008599  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:40:29.018788  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:40:29.030314  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:40:29.034634  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:40:29.034700  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:40:29.042685  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
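For reference, the symlink names used here (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: the value printed by "openssl x509 -hash -noout" becomes the "<hash>.0" filename that TLS clients look up under /etc/ssl/certs. A minimal sketch of recreating one of these links by hand, using the minikubeCA path from the commands above (illustrative only, not part of the test run):

	# print the subject hash OpenSSL expects for this CA
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# create the lookup symlink the same way the test's ln -fs does
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"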
	I0919 22:40:29.052467  102947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:40:29.056255  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:40:29.063105  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:40:29.071819  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:40:29.079410  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:40:29.086705  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:40:29.094001  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
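The -checkend 86400 runs above return a non-zero exit status if the certificate in question expires within the next 86400 seconds, so this block is effectively a "still valid for at least 24 hours" gate on the existing control-plane certs before they are reused. The same check can be run by hand against any of the files listed (sketch only):

	# exit 0: valid for more than 24h; exit 1: expiring or already expired
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "ok for >24h" || echo "expires within 24h"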
	I0919 22:40:29.101257  102947 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0919 22:40:29.101378  102947 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
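In the kubelet drop-in above, the empty ExecStart= line is the standard systemd idiom for clearing the command inherited from the base unit before the override on the next line takes effect, with --hostname-override and --node-ip set for this particular node (ha-326307-m03 / 192.168.49.4). A quick way to see the unit together with every drop-in in force on the node (illustrative, not something the test runs):

	# shows kubelet.service plus its drop-ins, including 10-kubeadm.conf
	systemctl cat kubelet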
	I0919 22:40:29.101410  102947 kube-vip.go:115] generating kube-vip config ...
	I0919 22:40:29.101456  102947 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:40:29.115062  102947 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:29.115120  102947 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
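Note that this generated manifest only sets up ARP-based failover for the VIP 192.168.49.254; the control-plane load-balancing part was skipped because the lsmod probe above found no ip_vs modules in the node kernel. Whether IPVS can be enabled at all depends on the host kernel the kic container runs on; a hedged check one could run inside the node (not part of the test flow):

	# try to load the IPVS core module, then re-check
	sudo modprobe ip_vs || true
	lsmod | grep ip_vs   # still empty means kube-vip stays in VIP-only mode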
	I0919 22:40:29.115184  102947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:40:29.124866  102947 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:40:29.124920  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:40:29.135111  102947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:40:29.156313  102947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:40:29.177045  102947 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:40:29.198544  102947 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:40:29.203037  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:29.216695  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:29.333585  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
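The /etc/hosts rewrite two steps earlier strips any stale control-plane.minikube.internal entry and re-pins that name to the HA virtual IP 192.168.49.254, so components on this node keep reaching the API server through the VIP rather than a single control-plane address. Verifying the pin on the node is a one-liner (illustrative):

	grep control-plane.minikube.internal /etc/hosts
	# expected: 192.168.49.254	control-plane.minikube.internal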
	I0919 22:40:29.349312  102947 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:40:29.349626  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:29.352738  102947 out.go:179] * Verifying Kubernetes components...
	I0919 22:40:29.354445  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:29.474185  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:29.488500  102947 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:40:29.488573  102947 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:40:29.488783  102947 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m03" to be "Ready" ...
	I0919 22:40:29.492092  102947 node_ready.go:49] node "ha-326307-m03" is "Ready"
	I0919 22:40:29.492121  102947 node_ready.go:38] duration metric: took 3.321791ms for node "ha-326307-m03" to be "Ready" ...
	I0919 22:40:29.492134  102947 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:40:29.492205  102947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:40:29.506850  102947 api_server.go:72] duration metric: took 157.484065ms to wait for apiserver process to appear ...
	I0919 22:40:29.506886  102947 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:40:29.506910  102947 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:40:29.511130  102947 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:40:29.512015  102947 api_server.go:141] control plane version: v1.34.0
	I0919 22:40:29.512036  102947 api_server.go:131] duration metric: took 5.141712ms to wait for apiserver health ...
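The healthz probe goes to https://192.168.49.2:8443 because the stale VIP host in the loaded client config was overridden a few lines earlier. Roughly the same probe from a shell, reusing the client cert, key and CA paths reported in the rest.Config dump above (sketch only; the endpoint answers with a plain "ok" when the apiserver is healthy):

	curl --cacert /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt \
	     --cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt \
	     --key /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key \
	     https://192.168.49.2:8443/healthz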
	I0919 22:40:29.512043  102947 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:40:29.518744  102947 system_pods.go:59] 24 kube-system pods found
	I0919 22:40:29.518774  102947 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.518782  102947 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.518787  102947 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:40:29.518791  102947 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:40:29.518796  102947 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:29.518800  102947 system_pods.go:61] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:29.518804  102947 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:29.518807  102947 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:29.518810  102947 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:40:29.518813  102947 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:40:29.518819  102947 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:29.518822  102947 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:40:29.518828  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.518858  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.518862  102947 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:29.518868  102947 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:29.518873  102947 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:29.518879  102947 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:40:29.518884  102947 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.518888  102947 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.518894  102947 system_pods.go:61] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:29.518897  102947 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:29.518900  102947 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:29.518905  102947 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:29.518910  102947 system_pods.go:74] duration metric: took 6.861836ms to wait for pod list to return data ...
	I0919 22:40:29.518919  102947 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:40:29.521697  102947 default_sa.go:45] found service account: "default"
	I0919 22:40:29.521719  102947 default_sa.go:55] duration metric: took 2.795273ms for default service account to be created ...
	I0919 22:40:29.521728  102947 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:40:29.527102  102947 system_pods.go:86] 24 kube-system pods found
	I0919 22:40:29.527136  102947 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.527144  102947 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.527166  102947 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:40:29.527174  102947 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:40:29.527181  102947 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:29.527186  102947 system_pods.go:89] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:29.527195  102947 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:29.527200  102947 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:29.527209  102947 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:40:29.527214  102947 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:40:29.527224  102947 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:29.527233  102947 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:40:29.527244  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.527251  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.527259  102947 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:29.527265  102947 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:29.527274  102947 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:29.527282  102947 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:40:29.527293  102947 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.527304  102947 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.527311  102947 system_pods.go:89] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:29.527318  102947 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:29.527326  102947 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:29.527331  102947 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:29.527342  102947 system_pods.go:126] duration metric: took 5.60777ms to wait for k8s-apps to be running ...
	I0919 22:40:29.527353  102947 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:40:29.527418  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:40:29.540084  102947 system_svc.go:56] duration metric: took 12.720236ms WaitForService to wait for kubelet
	I0919 22:40:29.540114  102947 kubeadm.go:578] duration metric: took 190.753677ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:40:29.540138  102947 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:40:29.543938  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:29.543961  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:29.543977  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:29.543981  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:29.543985  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:29.543988  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:29.543992  102947 node_conditions.go:105] duration metric: took 3.848698ms to run NodePressure ...
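The NodePressure pass above reads each node's capacity straight from the API (three control-plane nodes, each reporting 8 CPUs and 304681132Ki of ephemeral storage). An equivalent manual view with kubectl, where the kubeconfig path is an assumption based on the jenkins profile directory seen earlier in the log:

	# kubeconfig path is assumed; adjust to wherever the test profile writes it
	kubectl --kubeconfig /home/jenkins/minikube-integration/21594-14678/kubeconfig get nodes \
	  -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage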
	I0919 22:40:29.544002  102947 start.go:241] waiting for startup goroutines ...
	I0919 22:40:29.544021  102947 start.go:255] writing updated cluster config ...
	I0919 22:40:29.546124  102947 out.go:203] 
	I0919 22:40:29.547729  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:29.547827  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:29.549464  102947 out.go:179] * Starting "ha-326307-m04" worker node in "ha-326307" cluster
	I0919 22:40:29.551423  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:29.552959  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:29.554347  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:29.554374  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:29.554466  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:29.554528  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:29.554544  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:29.554661  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:29.576604  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:29.576623  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:29.576636  102947 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:40:29.576658  102947 start.go:360] acquireMachinesLock for ha-326307-m04: {Name:mk65e4f546dae46b7c7a6cfe6f590e09a0a01676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:29.576722  102947 start.go:364] duration metric: took 36.867µs to acquireMachinesLock for "ha-326307-m04"
	I0919 22:40:29.576740  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:29.576747  102947 fix.go:54] fixHost starting: m04
	I0919 22:40:29.576991  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:40:29.599524  102947 fix.go:112] recreateIfNeeded on ha-326307-m04: state=Stopped err=<nil>
	W0919 22:40:29.599554  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:29.601341  102947 out.go:252] * Restarting existing docker container for "ha-326307-m04" ...
	I0919 22:40:29.601436  102947 cli_runner.go:164] Run: docker start ha-326307-m04
	I0919 22:40:29.856928  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:40:29.877141  102947 kic.go:430] container "ha-326307-m04" state is running.
	I0919 22:40:29.877564  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m04
	I0919 22:40:29.898099  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:29.898353  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:29.898408  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	I0919 22:40:29.919242  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:29.919493  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32834 <nil> <nil>}
	I0919 22:40:29.919509  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:29.920238  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53392->127.0.0.1:32834: read: connection reset by peer
	I0919 22:40:32.921592  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:35.923978  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:38.925460  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:41.925968  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:44.927435  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:47.928879  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:50.930439  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:53.931750  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:56.932223  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:59.933541  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:02.934449  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:05.936468  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:08.938720  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:11.939132  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:14.940311  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:17.941338  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:20.943720  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:23.944321  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:26.945127  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:29.946482  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:32.947311  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:35.949504  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:38.950829  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:41.951282  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:44.951718  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:47.952886  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:50.954501  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:53.955026  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:56.955566  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:59.956458  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:02.958263  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:05.960452  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:08.960827  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:11.961991  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:14.963364  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:17.964467  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:20.966794  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:23.967257  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:26.968419  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:29.969450  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:32.970449  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:35.972383  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:38.974402  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:41.974947  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:44.975961  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:47.977119  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:50.979045  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:53.979535  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:56.980106  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:59.981632  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:02.983145  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:05.985114  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:08.987742  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:11.988246  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:14.988636  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:17.990247  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:20.990690  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:23.991025  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:26.992363  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:29.994267  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:43:29.994298  102947 ubuntu.go:182] provisioning hostname "ha-326307-m04"
	I0919 22:43:29.994384  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.014799  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.014894  102947 machine.go:96] duration metric: took 3m0.116525554s to provisionDockerMachine
	I0919 22:43:30.014980  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:43:30.015024  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.033859  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.033976  102947 retry.go:31] will retry after 180.600333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:30.215391  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.234687  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.234800  102947 retry.go:31] will retry after 396.872897ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:30.632462  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.651421  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.651553  102947 retry.go:31] will retry after 330.021621ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:30.982141  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:31.001874  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:31.001981  102947 retry.go:31] will retry after 902.78257ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:31.905550  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:31.924562  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:43:31.924688  102947 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:31.924702  102947 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:31.924747  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:43:31.924776  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:31.944532  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:31.944644  102947 retry.go:31] will retry after 370.439297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:32.316311  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:32.335705  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:32.335801  102947 retry.go:31] will retry after 471.735503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:32.808402  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:32.828725  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:32.828845  102947 retry.go:31] will retry after 653.918581ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:33.483771  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:33.505126  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:43:33.505274  102947 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:33.505310  102947 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:33.505321  102947 fix.go:56] duration metric: took 3m3.928573811s for fixHost
	I0919 22:43:33.505333  102947 start.go:83] releasing machines lock for "ha-326307-m04", held for 3m3.928601896s
	W0919 22:43:33.505353  102947 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:33.505432  102947 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:33.505457  102947 start.go:729] Will try again in 5 seconds ...
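At this point every SSH dial to the freshly published port 32834 was refused for a full three minutes, and the follow-up docker container inspect for the 22/tcp mapping then failed with "unable to inspect a not running container", i.e. ha-326307-m04 was no longer running by the time provisioning gave up, despite the earlier successful docker start. A few hedged commands for triaging that state by hand (container name and port are from the log; the m04 key path is assumed from the naming pattern of the machines directory):

	docker container inspect ha-326307-m04 --format '{{.State.Status}} exit={{.State.ExitCode}}'
	docker logs --tail 50 ha-326307-m04
	# if the container is actually running, probe the published SSH port directly
	ssh -i /home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m04/id_rsa \
	    -p 32834 docker@127.0.0.1 hostname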
	I0919 22:43:38.507265  102947 start.go:360] acquireMachinesLock for ha-326307-m04: {Name:mk65e4f546dae46b7c7a6cfe6f590e09a0a01676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:43:38.507371  102947 start.go:364] duration metric: took 72.258µs to acquireMachinesLock for "ha-326307-m04"
	I0919 22:43:38.507394  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:43:38.507402  102947 fix.go:54] fixHost starting: m04
	I0919 22:43:38.507660  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:43:38.526017  102947 fix.go:112] recreateIfNeeded on ha-326307-m04: state=Stopped err=<nil>
	W0919 22:43:38.526047  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:43:38.528104  102947 out.go:252] * Restarting existing docker container for "ha-326307-m04" ...
	I0919 22:43:38.528195  102947 cli_runner.go:164] Run: docker start ha-326307-m04
	I0919 22:43:38.792918  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:43:38.812750  102947 kic.go:430] container "ha-326307-m04" state is running.
	I0919 22:43:38.813122  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m04
	I0919 22:43:38.835015  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:43:38.835331  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:43:38.835404  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	I0919 22:43:38.855863  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:38.856092  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32839 <nil> <nil>}
	I0919 22:43:38.856104  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:43:38.856765  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33486->127.0.0.1:32839: read: connection reset by peer
	I0919 22:43:41.857087  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:44.857460  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:47.858230  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:50.860407  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:53.860840  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:56.862141  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:59.863585  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:02.864745  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:05.867376  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:08.869862  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:11.870894  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:14.871487  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:17.872736  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:20.874506  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:23.875596  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:26.875979  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:29.877435  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:32.878977  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:35.881595  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:38.883657  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:41.884099  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:44.885281  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:47.887113  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:50.889449  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:53.889898  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:56.891131  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:59.893426  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:02.895108  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:05.896902  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:08.899087  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:11.900184  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:14.901096  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:17.902201  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:20.904503  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:23.904962  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:26.906198  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:29.908575  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:32.910119  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:35.912526  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:38.914521  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:41.915090  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:44.916505  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:47.917924  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:50.919469  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:53.919814  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:56.920315  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:59.922657  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:02.924190  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:05.926504  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:08.928432  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:11.929228  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:14.930499  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:17.931536  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:20.934030  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:23.934965  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:26.936258  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:29.938459  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:32.939438  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:35.941457  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:38.943814  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:46:38.943857  102947 ubuntu.go:182] provisioning hostname "ha-326307-m04"
	I0919 22:46:38.943941  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:38.964275  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:38.964337  102947 machine.go:96] duration metric: took 3m0.128991371s to provisionDockerMachine
	I0919 22:46:38.964416  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:46:38.964451  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:38.983816  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:38.983960  102947 retry.go:31] will retry after 364.420464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:39.349386  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:39.369081  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:39.369225  102947 retry.go:31] will retry after 206.788026ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:39.576720  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:39.596502  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:39.596609  102947 retry.go:31] will retry after 511.892744ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:40.109367  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:40.129534  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:40.129648  102947 retry.go:31] will retry after 811.778179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:40.941718  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:40.962501  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:46:40.962610  102947 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:46:40.962628  102947 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:40.962672  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:46:40.962701  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:40.983319  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:40.983479  102947 retry.go:31] will retry after 310.783714ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:41.295059  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:41.314519  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:41.314654  102947 retry.go:31] will retry after 532.410728ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:41.847306  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:41.866776  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:41.866902  102947 retry.go:31] will retry after 498.480272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:42.366422  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:42.388450  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:46:42.388595  102947 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:46:42.388613  102947 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:42.388623  102947 fix.go:56] duration metric: took 3m3.881222347s for fixHost
	I0919 22:46:42.388631  102947 start.go:83] releasing machines lock for "ha-326307-m04", held for 3m3.881250201s
	W0919 22:46:42.388708  102947 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-326307" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:42.391386  102947 out.go:203] 
	W0919 22:46:42.393146  102947 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:46:42.393190  102947 out.go:285] * 
	W0919 22:46:42.395039  102947 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 22:46:42.396646  102947 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb7d0d80b9c23       6e38f40d628db       5 minutes ago       Running             storage-provisioner       2                   a66e01a465731       storage-provisioner
	fea1c0534d95d       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   c6c63e662186b       kindnet-gxnzs
	fff949799c16f       52546a367cc9e       6 minutes ago       Running             coredns                   1                   d66fcc49f8eef       coredns-66bc5c9577-wqvzd
	9b01ee2966e08       52546a367cc9e       6 minutes ago       Running             coredns                   1                   8915a954c3a5e       coredns-66bc5c9577-9j5pw
	471e8ec48d678       8c811b4aec35f       6 minutes ago       Running             busybox                   1                   4242a65c0c92e       busybox-7b57f96db7-m8swj
	a7d6081c4523a       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       1                   a66e01a465731       storage-provisioner
	c1e4cc3b9a7f1       df0860106674d       6 minutes ago       Running             kube-proxy                1                   bb87d6f8210e1       kube-proxy-8kxtv
	83bc1a5b44143       765655ea60781       6 minutes ago       Running             kube-vip                  0                   8124d18d08f1c       kube-vip-ha-326307
	63dc43f0224fa       46169d968e920       6 minutes ago       Running             kube-scheduler            1                   b84e223a297e4       kube-scheduler-ha-326307
	7a855457ed99a       a0af72f2ec6d6       6 minutes ago       Running             kube-controller-manager   1                   35b9028490f76       kube-controller-manager-ha-326307
	c543ffd76b85c       5f1f5298c888d       6 minutes ago       Running             etcd                      1                   a85600718119d       etcd-ha-326307
	e1a181d28b52f       90550c43ad2bc       6 minutes ago       Running             kube-apiserver            1                   4ff7be1cea576       kube-apiserver-ha-326307
	7791f71e5d5a5       8c811b4aec35f       21 minutes ago      Exited              busybox                   0                   b5e0c0fffea25       busybox-7b57f96db7-m8swj
	ca68bbc020e20       52546a367cc9e       22 minutes ago      Exited              coredns                   0                   132023f334782       coredns-66bc5c9577-9j5pw
	1f618dc8f0392       52546a367cc9e       23 minutes ago      Exited              coredns                   0                   a5ac32b4949ab       coredns-66bc5c9577-wqvzd
	365cc00c2e009       409467f978b4a       23 minutes ago      Exited              kindnet-cni               0                   96e027ec2b5fb       kindnet-gxnzs
	bd9e41958ffbb       df0860106674d       23 minutes ago      Exited              kube-proxy                0                   06da62af16945       kube-proxy-8kxtv
	456a0c3cbf5ce       46169d968e920       23 minutes ago      Exited              kube-scheduler            0                   f02b9e82ff9b1       kube-scheduler-ha-326307
	05ab0247624a7       a0af72f2ec6d6       23 minutes ago      Exited              kube-controller-manager   0                   6026f58e8c23a       kube-controller-manager-ha-326307
	e5c59a6abe977       5f1f5298c888d       23 minutes ago      Exited              etcd                      0                   5f89382a468ad       etcd-ha-326307
	e80d65e3c7c18       90550c43ad2bc       23 minutes ago      Exited              kube-apiserver            0                   3813626701bd1       kube-apiserver-ha-326307
	
	
	==> containerd <==
	Sep 19 22:40:50 ha-326307 containerd[478]: time="2025-09-19T22:40:50.496292846Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 19 22:40:50 ha-326307 containerd[478]: time="2025-09-19T22:40:50.941042111Z" level=info msg="RemoveContainer for \"f52d2d9f5881b5d50f95a3aeef2c876d51dd6b2be6c464a7464b26e8175b0fc6\""
	Sep 19 22:40:50 ha-326307 containerd[478]: time="2025-09-19T22:40:50.945894995Z" level=info msg="RemoveContainer for \"f52d2d9f5881b5d50f95a3aeef2c876d51dd6b2be6c464a7464b26e8175b0fc6\" returns successfully"
	Sep 19 22:41:02 ha-326307 containerd[478]: time="2025-09-19T22:41:02.735151860Z" level=info msg="CreateContainer within sandbox \"a66e01a46573183df8e2c6c041cfc03b30ce85291f726f529b293f52bca48ac9\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
	Sep 19 22:41:02 ha-326307 containerd[478]: time="2025-09-19T22:41:02.750197533Z" level=info msg="CreateContainer within sandbox \"a66e01a46573183df8e2c6c041cfc03b30ce85291f726f529b293f52bca48ac9\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"bb7d0d80b9c2303a244cfffc060085bd15528b57546277cdaaaf2dd707b68f1f\""
	Sep 19 22:41:02 ha-326307 containerd[478]: time="2025-09-19T22:41:02.750866519Z" level=info msg="StartContainer for \"bb7d0d80b9c2303a244cfffc060085bd15528b57546277cdaaaf2dd707b68f1f\""
	Sep 19 22:41:02 ha-326307 containerd[478]: time="2025-09-19T22:41:02.809028664Z" level=info msg="StartContainer for \"bb7d0d80b9c2303a244cfffc060085bd15528b57546277cdaaaf2dd707b68f1f\" returns successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.721548399Z" level=info msg="RemoveContainer for \"d1c181558b58c3ebc7dae5df97b80119a8c6d18dc19b0532b61999c4db0c6668\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.726063631Z" level=info msg="RemoveContainer for \"d1c181558b58c3ebc7dae5df97b80119a8c6d18dc19b0532b61999c4db0c6668\" returns successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.728293194Z" level=info msg="StopPodSandbox for \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.728427999Z" level=info msg="TearDown network for sandbox \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\" successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.728450762Z" level=info msg="StopPodSandbox for \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\" returns successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.728930508Z" level=info msg="RemovePodSandbox for \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.728969583Z" level=info msg="Forcibly stopping sandbox \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.729045579Z" level=info msg="TearDown network for sandbox \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\" successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.733274152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.733381747Z" level=info msg="RemovePodSandbox \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\" returns successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734017576Z" level=info msg="StopPodSandbox for \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734138515Z" level=info msg="TearDown network for sandbox \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\" successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734174247Z" level=info msg="StopPodSandbox for \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\" returns successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734599814Z" level=info msg="RemovePodSandbox for \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734628547Z" level=info msg="Forcibly stopping sandbox \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734699211Z" level=info msg="TearDown network for sandbox \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\" successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.738452443Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.738554754Z" level=info msg="RemovePodSandbox \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\" returns successfully"
	
	
	==> coredns [1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6] <==
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54337 - 24572 "HINFO IN 5143313645322175939.5313042790825403134. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069732464s
	[INFO] 10.244.0.4:35490 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000326279s
	[INFO] 10.244.0.4:55330 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.014239882s
	[INFO] 10.244.1.2:39628 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210602s
	[INFO] 10.244.1.2:46891 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.001261026s
	[INFO] 10.244.1.2:43124 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.00098216s
	[INFO] 10.244.1.2:49555 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.00013424s
	[INFO] 10.244.0.4:40362 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00024135s
	[INFO] 10.244.0.4:45629 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168694s
	[INFO] 10.244.1.2:52354 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189457s
	[INFO] 10.244.1.2:43857 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161715s
	[INFO] 10.244.1.2:51922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145764s
	[INFO] 10.244.1.2:57320 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009888497s
	[INFO] 10.244.1.2:49841 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169285s
	[INFO] 10.244.0.4:51548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159656s
	[INFO] 10.244.0.4:48681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110507s
	[INFO] 10.244.1.2:52993 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137337s
	[INFO] 10.244.0.4:59855 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113524s
	[INFO] 10.244.1.2:56284 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188608s
	[INFO] 10.244.1.2:58675 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149085s
	[INFO] 10.244.1.2:38911 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131505s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9b01ee2966e081085b732d62e68985fd9249574188499e7e99fa53ff3e585c2d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35530 - 6163 "HINFO IN 6373030861249236477.4474115650148028833. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02205233s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93] <==
	[INFO] 127.0.0.1:49588 - 50300 "HINFO IN 9047056621409016881.2982736294753326061. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063128768s
	[INFO] 10.244.0.4:39328 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.014249598s
	[INFO] 10.244.0.4:59759 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.013332957s
	[INFO] 10.244.0.4:50336 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.008865788s
	[INFO] 10.244.1.2:42753 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000158745s
	[INFO] 10.244.0.4:52334 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261159s
	[INFO] 10.244.0.4:43558 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010645205s
	[INFO] 10.244.0.4:51059 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154122s
	[INFO] 10.244.0.4:46147 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012594143s
	[INFO] 10.244.0.4:39163 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143755s
	[INFO] 10.244.0.4:57061 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014731s
	[INFO] 10.244.1.2:59502 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000129746s
	[INFO] 10.244.1.2:49570 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172915s
	[INFO] 10.244.1.2:48519 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000175653s
	[INFO] 10.244.0.4:50569 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000326714s
	[INFO] 10.244.0.4:45465 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000234038s
	[INFO] 10.244.1.2:52569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176154s
	[INFO] 10.244.1.2:36719 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000205481s
	[INFO] 10.244.1.2:58705 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195468s
	[INFO] 10.244.0.4:38035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216169s
	[INFO] 10.244.0.4:52287 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129599s
	[INFO] 10.244.0.4:37285 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186803s
	[INFO] 10.244.1.2:39163 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165716s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fff949799c16ffb392a665b0e5af2f326948a468e2495b8ea2fa176e06b5cfbf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60701 - 36326 "HINFO IN 1706815658337671432.2830354807318160675. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06080012s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-326307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:23:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:46:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:45:24 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:45:24 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:45:24 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:45:24 +0000   Fri, 19 Sep 2025 22:23:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-326307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6ba0924deaa4643b45558c406a92530
	  System UUID:                9c3f30ed-68b2-4a1c-af95-9031ae210a78
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-m8swj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-66bc5c9577-9j5pw             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23m
	  kube-system                 coredns-66bc5c9577-wqvzd             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23m
	  kube-system                 etcd-ha-326307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         23m
	  kube-system                 kindnet-gxnzs                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23m
	  kube-system                 kube-apiserver-ha-326307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-326307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-8kxtv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-326307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-326307                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m32s                  kube-proxy       
	  Normal  Starting                 23m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)      kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)      kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)      kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                    kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  Starting                 23m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    23m                    kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  23m                    kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           8m13s                  node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  Starting                 6m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m41s (x8 over 6m41s)  kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s (x8 over 6m41s)  kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m41s (x7 over 6m41s)  kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m31s                  node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           6m31s                  node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           6m25s                  node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	
	
	Name:               ha-326307-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:46:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:43:43 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:43:43 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:43:43 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:43:43 +0000   Fri, 19 Sep 2025 22:24:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-326307-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 9fd69bf7d4de4d0cb4316de818a4daa2
	  System UUID:                8095cd89-f43b-4d8a-adef-b40d6aaa7ad2
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tfpvf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 etcd-ha-326307-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         22m
	  kube-system                 kindnet-mk6pv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-ha-326307-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-326307-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-q8mtj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-326307-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-326307-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 22m                    kube-proxy       
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  NodeAllocatableEnforced  8m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m19s (x7 over 8m19s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m19s (x8 over 8m19s)  kubelet          Node ha-326307-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m19s (x8 over 8m19s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m19s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m13s                  node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  Starting                 6m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m38s (x8 over 6m38s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m38s (x8 over 6m38s)  kubelet          Node ha-326307-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m38s (x7 over 6m38s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m31s                  node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           6m31s                  node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           6m25s                  node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	
	
	==> dmesg <==
	[Sep19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001853] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001006] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.090013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.467231] i8042: Warning: Keylock active
	[  +0.009747] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004210] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001075] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000901] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000908] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001042] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001465] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.534799] block sda: the capability attribute has been deprecated.
	[  +0.099617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027269] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.089616] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6] <==
	{"level":"info","ts":"2025-09-19T22:40:24.177644Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:40:24.185512Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:40:24.185980Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.175107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:51430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:47.201772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:51452","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:46:47.211965Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 16449250771884659557)"}
	{"level":"info","ts":"2025-09-19T22:46:47.213841Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"5512420eb470d1ce","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-19T22:46:47.213908Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.213977Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:46:47.214000Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.214039Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.213998Z","caller":"etcdserver/server.go:718","msg":"rejected Raft message from removed member","local-member-id":"aec36adc501070cc","removed-member-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.214075Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2025-09-19T22:46:47.214126Z","caller":"etcdserver/server.go:718","msg":"rejected Raft message from removed member","local-member-id":"aec36adc501070cc","removed-member-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.214134Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2025-09-19T22:46:47.214052Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:46:47.214191Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.214316Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","error":"context canceled"}
	{"level":"warn","ts":"2025-09-19T22:46:47.214372Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"5512420eb470d1ce","error":"failed to read 5512420eb470d1ce on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-09-19T22:46:47.214404Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.214547Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","error":"context canceled"}
	{"level":"info","ts":"2025-09-19T22:46:47.214582Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:46:47.214605Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:46:47.214619Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.224066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:46740","server-name":"","error":"EOF"}
	
	
	==> etcd [e5c59a6abe97751de42afd27010936d1c3a401fad6cd730e75a1692a895b4fbc] <==
	{"level":"info","ts":"2025-09-19T22:39:52.140938Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-09-19T22:39:52.162339Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082596421131,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-19T22:39:52.340049Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"1.996479221s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-09-19T22:39:52.340124Z","caller":"traceutil/trace.go:172","msg":"trace[586308872] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"1.996568167s","start":"2025-09-19T22:39:50.343542Z","end":"2025-09-19T22:39:52.340111Z","steps":["trace[586308872] 'agreement among raft nodes before linearized reading'  (duration: 1.996477658s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:39:52.340628Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:39:50.343527Z","time spent":"1.997078725s","remote":"127.0.0.1:36004","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2025/09/19 22:39:52 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-09-19T22:39:52.496622Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:39:45.496513Z","time spent":"7.000101766s","remote":"127.0.0.1:36464","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	{"level":"warn","ts":"2025-09-19T22:39:52.664567Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082596421131,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-19T22:39:53.164691Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082596421131,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-19T22:39:53.664930Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082596421131,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-09-19T22:39:53.841224Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-19T22:39:53.841312Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-19T22:39:53.841335Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4070] sent MsgPreVote request to 5512420eb470d1ce at term 2"}
	{"level":"info","ts":"2025-09-19T22:39:53.841349Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4070] sent MsgPreVote request to e4477a6cd7815365 at term 2"}
	{"level":"info","ts":"2025-09-19T22:39:53.841387Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-09-19T22:39:53.841403Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-09-19T22:39:53.856629Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"10.006331529s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-09-19T22:39:53.856703Z","caller":"traceutil/trace.go:172","msg":"trace[357958415] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"10.006425985s","start":"2025-09-19T22:39:43.850264Z","end":"2025-09-19T22:39:53.856690Z","steps":["trace[357958415] 'agreement among raft nodes before linearized reading'  (duration: 10.006330214s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:39:53.856753Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:39:43.850240Z","time spent":"10.006497987s","remote":"127.0.0.1:36302","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	2025/09/19 22:39:53 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-09-19T22:39:54.165033Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082596421131,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-19T22:39:54.350624Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"1.999804258s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context deadline exceeded"}
	{"level":"info","ts":"2025-09-19T22:39:54.350972Z","caller":"traceutil/trace.go:172","msg":"trace[1511115829] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"2.00016656s","start":"2025-09-19T22:39:52.350791Z","end":"2025-09-19T22:39:54.350957Z","steps":["trace[1511115829] 'agreement among raft nodes before linearized reading'  (duration: 1.999802512s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:39:54.351034Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:39:52.350777Z","time spent":"2.000237823s","remote":"127.0.0.1:35978","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2025/09/19 22:39:54 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> kernel <==
	 22:46:53 up  1:29,  0 users,  load average: 2.36, 1.44, 1.10
	Linux ha-326307 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [365cc00c2e009eeed7e71d1202a4d406c12f0d9faee38762ba691eb2d7c71f89] <==
	I0919 22:39:10.992568       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:20.990595       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:20.990634       1 main.go:301] handling current node
	I0919 22:39:20.990655       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:20.990663       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:39:20.990874       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:20.990888       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:30.995276       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:30.995312       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:30.995572       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:30.995598       1 main.go:301] handling current node
	I0919 22:39:30.995611       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:30.995615       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:39:40.996306       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:40.996354       1 main.go:301] handling current node
	I0919 22:39:40.996386       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:40.996395       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:39:40.996628       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:40.996654       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:50.991728       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:50.991865       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:39:50.992227       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:50.992324       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:50.992803       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:50.992828       1 main.go:301] handling current node
	
	
	==> kindnet [fea1c0534d95d8681a40f476ef920c8ced5eb8897a63d871e66830a2e35509fc] <==
	I0919 22:46:11.327662       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:46:11.327920       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:46:11.327938       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:21.328030       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:46:21.328073       1 main.go:301] handling current node
	I0919 22:46:21.328087       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:21.328093       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:46:21.328336       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:46:21.328349       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:31.327485       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:31.327520       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:46:31.327776       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:46:31.327794       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:31.327897       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:46:31.327908       1 main.go:301] handling current node
	I0919 22:46:41.328117       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:46:41.328176       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:41.328398       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:46:41.328415       1 main.go:301] handling current node
	I0919 22:46:41.328447       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:41.328457       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:46:51.327464       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:46:51.327528       1 main.go:301] handling current node
	I0919 22:46:51.327543       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:51.327548       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5] <==
	I0919 22:40:19.279381       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	W0919 22:40:19.281370       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3 192.168.49.4]
	I0919 22:40:19.295421       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0919 22:40:19.295734       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:40:19.295813       1 policy_source.go:240] refreshing policies
	I0919 22:40:19.318977       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 22:40:19.385137       1 controller.go:667] quota admission added evaluator for: endpoints
	E0919 22:40:19.394148       1 controller.go:97] Error removing old endpoints from kubernetes service: Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:40:19.817136       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0919 22:40:20.175946       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 22:40:21.106965       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I0919 22:40:21.115392       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 22:40:22.902022       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:40:23.000359       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 22:40:23.094961       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0919 22:41:31.899871       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:41:34.521052       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:42:39.388525       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:42:45.838122       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:43:41.302570       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:44:00.530191       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:44:44.037874       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:45:10.813928       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:46:01.956836       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:46:26.916270       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-apiserver [e80d65e3c7c18da87f7fa003e39382f5a4285ba4782fc295197421c6b882a161] <==
	E0919 22:39:54.523383       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.523431       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.526237       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.526320       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.522979       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527081       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527220       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527341       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527429       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527492       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527556       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527638       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528262       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528338       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528394       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528418       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528451       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528480       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528501       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533700       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533915       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533941       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533972       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533985       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533997       1 watcher.go:335] watch chan error: etcdserver: no leader
	
	
	==> kube-controller-manager [05ab0247624a7f8ffa6bc948e3abc3adc49911c297291eb0a6dd42e3df39f4cd] <==
	I0919 22:23:38.744532       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:23:38.744726       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0919 22:23:38.744739       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0919 22:23:38.744729       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:23:38.744737       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:23:38.744759       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:23:38.745195       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 22:23:38.745255       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:23:38.746448       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:23:38.748706       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 22:23:38.750017       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.750086       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:23:38.751270       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 22:23:38.760899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 22:23:38.760926       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 22:23:38.760971       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 22:23:38.765332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.771790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:08.307746       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m02\" does not exist"
	I0919 22:24:08.319829       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:24:08.699971       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m02"
	E0919 22:24:31.036531       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8ztpb failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8ztpb\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:24:31.706808       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m03\" does not exist"
	I0919 22:24:31.736561       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:24:33.715916       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m03"
	
	
	==> kube-controller-manager [7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c] <==
	I0919 22:40:22.614855       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 22:40:22.616016       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0919 22:40:22.622579       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0919 22:40:22.624722       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 22:40:22.626205       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:40:22.627256       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:40:22.631207       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:40:22.638798       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:40:22.639864       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0919 22:40:22.639886       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:40:22.639904       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0919 22:40:22.640312       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 22:40:22.640328       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:40:22.640420       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 22:40:22.640606       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307"
	I0919 22:40:22.640638       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m03"
	I0919 22:40:22.640606       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m02"
	I0919 22:40:22.640694       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 22:40:22.946089       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-49s8d\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:40:22.946224       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e75e0609-6186-44fd-8674-15383f700490", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-49s8d": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:40:56.500901       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-49s8d\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:40:56.501810       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e75e0609-6186-44fd-8674-15383f700490", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-49s8d": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:40:57.687491       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-49s8d\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:40:57.688223       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e75e0609-6186-44fd-8674-15383f700490", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-49s8d": the object has been modified; please apply your changes to the latest version and try again
	E0919 22:46:46.068479       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-proxy [bd9e41958ffbbb27dab3d180a56fb27df36a1a1896db3a6f322a8aabcda57677] <==
	I0919 22:23:40.183862       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:23:40.251957       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:23:40.353105       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:23:40.353291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:23:40.353503       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:23:40.383440       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:23:40.383522       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:23:40.391534       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:23:40.391999       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:23:40.392045       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:23:40.394189       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:23:40.394304       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:23:40.394470       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:23:40.394480       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:23:40.394266       1 config.go:200] "Starting service config controller"
	I0919 22:23:40.394506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:23:40.394279       1 config.go:309] "Starting node config controller"
	I0919 22:23:40.394533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:23:40.394540       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:23:40.494617       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:23:40.494643       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:23:40.494649       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c1e4cc3b9a7f1259a1339b951fd30079b99dc7acedc895c7ae90814405daad16] <==
	I0919 22:40:20.575328       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:40:20.672061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:40:20.772951       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:40:20.773530       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:40:20.774779       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:40:20.837591       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:40:20.837664       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:40:20.853483       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:40:20.853910       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:40:20.853934       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:40:20.859319       1 config.go:309] "Starting node config controller"
	I0919 22:40:20.859436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:40:20.859447       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:40:20.859941       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:40:20.859974       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:40:20.860439       1 config.go:200] "Starting service config controller"
	I0919 22:40:20.860604       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:40:20.861833       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:40:20.862286       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:40:20.960109       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:40:20.960793       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:40:20.962617       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [456a0c3cbf5ce028b9cbac658728c1fee13ad8e2659bfa0c625cd685d711c708] <==
	E0919 22:23:33.115040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:23:33.129278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:23:33.194774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:23:33.337699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0919 22:23:35.055089       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:24:08.346116       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:08.346301       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:08.365410       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-78xs2" node="ha-326307-m02"
	E0919 22:24:08.365600       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="kube-system/kindnet-78xs2"
	E0919 22:24:08.379248       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"kindnet-78xs2\" not found" pod="kube-system/kindnet-78xs2"
	E0919 22:24:10.002296       1 schedule_one.go:975] "Scheduler cache AssumePod failed" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:10.002334       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	I0919 22:24:10.002368       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:31.751287       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-pnj9r" node="ha-326307-m03"
	E0919 22:24:31.751375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-pnj9r"
	E0919 22:24:31.887089       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:31.887576       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 173e48ec-ef56-4824-9f55-a04b199b7943(kube-system/kindnet-qxwpq) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	E0919 22:24:31.887605       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	I0919 22:24:31.888969       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:35.828083       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-nzzlb" node="ha-326307-m03"
	E0919 22:24:35.828187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-nzzlb"
	E0919 22:24:35.839864       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	E0919 22:24:35.839940       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ba4fd407-2e93-4324-ab2d-4f192d79fdf5(kube-system/kindnet-dmxl8) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	E0919 22:24:35.839964       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	I0919 22:24:35.841757       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	
	
	==> kube-scheduler [63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284] <==
	I0919 22:40:14.121705       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:40:19.175600       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 22:40:19.175869       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 22:40:19.175952       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:40:19.175968       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:40:19.217556       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:40:19.217674       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:40:19.220816       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:40:19.221038       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:40:19.226224       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:40:19.226332       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:40:19.321477       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.402545     619 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.403468     619 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:40:19 ha-326307 kubelet[619]: E0919 22:40:19.407687     619 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-326307\" already exists" pod="kube-system/kube-apiserver-ha-326307"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.701084     619 apiserver.go:52] "Watching apiserver"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.707631     619 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-326307" podUID="36baecf0-60bd-41c0-a3c8-45e4f6ebddad"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.728881     619 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-326307"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.728907     619 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-326307"
	Sep 19 22:40:19 ha-326307 kubelet[619]: E0919 22:40:19.731920     619 status_manager.go:1041] "Failed to update status for pod" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36baecf0-60bd-41c0-a3c8-45e4f6ebddad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-19T22:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-19T22:40:12Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-19T22:40:13Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-19T22:40:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-19T22:40:12Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\
\\"containerd://83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad\\\",\\\"image\\\":\\\"ghcr.io/kube-vip/kube-vip:v1.0.0\\\",\\\"imageID\\\":\\\"ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-vip\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-19T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/admin.conf\\\",\\\"name\\\":\\\"kubeconfig\\\"}]}],\\\"startTime\\\":\\\"2025-09-19T22:40:12Z\\\"}}\" for pod \"kube-system\"/\"kube-vip-ha-326307\": pods \"kube-vip-ha-326307\" not found" pod="kube-system/kube-vip-ha-326307"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.801129     619 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813377     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cafe04c6-2dce-4b93-b6d1-205efc39b360-tmp\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813554     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-xtables-lock\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813666     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70be5fcc-7ab6-4eb1-870d-988fee1a01bb-xtables-lock\") pod \"kube-proxy-8kxtv\" (UID: \"70be5fcc-7ab6-4eb1-870d-988fee1a01bb\") " pod="kube-system/kube-proxy-8kxtv"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813815     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-lib-modules\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813849     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70be5fcc-7ab6-4eb1-870d-988fee1a01bb-lib-modules\") pod \"kube-proxy-8kxtv\" (UID: \"70be5fcc-7ab6-4eb1-870d-988fee1a01bb\") " pod="kube-system/kube-proxy-8kxtv"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813876     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-cni-cfg\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.823375     619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-326307" podStartSLOduration=0.823354362 podStartE2EDuration="823.354362ms" podCreationTimestamp="2025-09-19 22:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:40:19.822728814 +0000 UTC m=+7.186818639" watchObservedRunningTime="2025-09-19 22:40:19.823354362 +0000 UTC m=+7.187444186"
	Sep 19 22:40:20 ha-326307 kubelet[619]: I0919 22:40:20.739430     619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fb2219973c6b37a95b47a05e51f4922" path="/var/lib/kubelet/pods/5fb2219973c6b37a95b47a05e51f4922/volumes"
	Sep 19 22:40:21 ha-326307 kubelet[619]: I0919 22:40:21.854071     619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 19 22:40:26 ha-326307 kubelet[619]: I0919 22:40:26.469144     619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 19 22:40:27 ha-326307 kubelet[619]: I0919 22:40:27.660037     619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 19 22:40:50 ha-326307 kubelet[619]: I0919 22:40:50.939471     619 scope.go:117] "RemoveContainer" containerID="f52d2d9f5881b5d50f95a3aeef2c876d51dd6b2be6c464a7464b26e8175b0fc6"
	Sep 19 22:40:50 ha-326307 kubelet[619]: I0919 22:40:50.939831     619 scope.go:117] "RemoveContainer" containerID="a7d6081c4523a1615c9325b1139e2303619e28b6fc78896684594ac51dc7c0d2"
	Sep 19 22:40:50 ha-326307 kubelet[619]: E0919 22:40:50.940028     619 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cafe04c6-2dce-4b93-b6d1-205efc39b360)\"" pod="kube-system/storage-provisioner" podUID="cafe04c6-2dce-4b93-b6d1-205efc39b360"
	Sep 19 22:41:02 ha-326307 kubelet[619]: I0919 22:41:02.729182     619 scope.go:117] "RemoveContainer" containerID="a7d6081c4523a1615c9325b1139e2303619e28b6fc78896684594ac51dc7c0d2"
	Sep 19 22:41:12 ha-326307 kubelet[619]: I0919 22:41:12.720023     619 scope.go:117] "RemoveContainer" containerID="d1c181558b58c3ebc7dae5df97b80119a8c6d18dc19b0532b61999c4db0c6668"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-326307 -n ha-326307
helpers_test.go:269: (dbg) Run:  kubectl --context ha-326307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-n7chr
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-326307 describe pod busybox-7b57f96db7-n7chr
helpers_test.go:290: (dbg) kubectl --context ha-326307 describe pod busybox-7b57f96db7-n7chr:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-n7chr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fzr8g (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fzr8g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age              From               Message
	  ----     ------            ----             ----               -------
	  Warning  FailedScheduling  8s               default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  8s               default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  8s               default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  8s               default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  8s (x2 over 9s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (9.59s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-326307" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-326307\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-326307\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares
\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.0\",\"ClusterName\":\"ha-326307\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"containerd\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"containerd\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m
02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"containerd\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\
":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"
SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-326307
helpers_test.go:243: (dbg) docker inspect ha-326307:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	        "Created": "2025-09-19T22:23:18.619000062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 103141,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:40:06.624789529Z",
	            "FinishedAt": "2025-09-19T22:40:05.96037119Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hosts",
	        "LogPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3-json.log",
	        "Name": "/ha-326307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-326307:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-326307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	                "LowerDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-326307",
	                "Source": "/var/lib/docker/volumes/ha-326307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-326307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-326307",
	                "name.minikube.sigs.k8s.io": "ha-326307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "06e56c61a506ab53aec79a320b27a6a2cf564500e22874ecad29c9521c3f21e9",
	            "SandboxKey": "/var/run/docker/netns/06e56c61a506",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32819"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32820"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32823"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32821"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32822"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-326307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:8a:0a:e2:38:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "465af21e2d8d112a34f14f7c3ab89eac4e6e57f582ff8e32b514381f55dd085e",
	                    "EndpointID": "bf734c63b8ebe83bbbed163afe56c19f4973081d194aed0cefd76108129a5748",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-326307",
	                        "5e0f1fe86b08"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
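
The inspect output above shows the published ports for the ha-326307 container (22/tcp on 127.0.0.1:32819, 8443/tcp on 127.0.0.1:32822, and so on); later in these logs the same mapping is read back with a Go template to build the SSH endpoint. A minimal sketch of that lookup, assuming the docker CLI is on PATH; hostPortFor is an illustrative helper, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor runs `docker container inspect` with the same Go template the
// logs show being used, and returns the host port published for the given
// container port (for example "22/tcp").
func hostPortFor(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPortFor("ha-326307", "22/tcp")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// Per the inspect output above this prints 127.0.0.1:32819.
	fmt.Println("ssh endpoint:", "127.0.0.1:"+p)
}
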
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-326307 -n ha-326307
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 logs -n 25: (1.627886691s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m02 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m02.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp testdata/cp-test.txt ha-326307-m04:/home/docker/cp-test.txt                                                            │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307-m04.txt │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307:/home/docker/cp-test_ha-326307-m04_ha-326307.txt                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307.txt                                                │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m02 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m03:/home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ node    │ ha-326307 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ node    │ ha-326307 node start m02 --alsologtostderr -v 5                                                                                     │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ node    │ ha-326307 node list --alsologtostderr -v 5                                                                                          │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:39 UTC │                     │
	│ stop    │ ha-326307 stop --alsologtostderr -v 5                                                                                               │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:39 UTC │ 19 Sep 25 22:40 UTC │
	│ start   │ ha-326307 start --wait true --alsologtostderr -v 5                                                                                  │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:40 UTC │                     │
	│ node    │ ha-326307 node list --alsologtostderr -v 5                                                                                          │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:46 UTC │                     │
	│ node    │ ha-326307 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:46 UTC │ 19 Sep 25 22:46 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:40:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:40:06.378966  102947 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:40:06.379330  102947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:40:06.379341  102947 out.go:374] Setting ErrFile to fd 2...
	I0919 22:40:06.379345  102947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:40:06.379571  102947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:40:06.380057  102947 out.go:368] Setting JSON to false
	I0919 22:40:06.381142  102947 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4950,"bootTime":1758316656,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:40:06.381289  102947 start.go:140] virtualization: kvm guest
	I0919 22:40:06.383708  102947 out.go:179] * [ha-326307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:40:06.385240  102947 notify.go:220] Checking for updates...
	I0919 22:40:06.385299  102947 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:40:06.386659  102947 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:40:06.388002  102947 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:40:06.389281  102947 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:40:06.390761  102947 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:40:06.392296  102947 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:40:06.394377  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:06.394567  102947 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:40:06.419564  102947 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:40:06.419671  102947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:40:06.482479  102947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:40:06.471430741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:40:06.482585  102947 docker.go:318] overlay module found
	I0919 22:40:06.484475  102947 out.go:179] * Using the docker driver based on existing profile
	I0919 22:40:06.485822  102947 start.go:304] selected driver: docker
	I0919 22:40:06.485843  102947 start.go:918] validating driver "docker" against &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:40:06.485989  102947 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:40:06.486131  102947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:40:06.542030  102947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:40:06.531788772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:40:06.542709  102947 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:40:06.542747  102947 cni.go:84] Creating CNI manager for ""
	I0919 22:40:06.542808  102947 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:40:06.542862  102947 start.go:348] cluster config:
	{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:40:06.544976  102947 out.go:179] * Starting "ha-326307" primary control-plane node in "ha-326307" cluster
	I0919 22:40:06.546636  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:06.548781  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:06.550349  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:06.550411  102947 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 22:40:06.550421  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:06.550484  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:06.550539  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:06.550548  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:06.550672  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:06.573025  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:06.573049  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:06.573066  102947 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:40:06.573093  102947 start.go:360] acquireMachinesLock for ha-326307: {Name:mk42b79b90944aab63c8b37c2f94e04ca1ebec1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:06.573185  102947 start.go:364] duration metric: took 59.872µs to acquireMachinesLock for "ha-326307"
	I0919 22:40:06.573210  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:06.573217  102947 fix.go:54] fixHost starting: 
	I0919 22:40:06.573525  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:06.592648  102947 fix.go:112] recreateIfNeeded on ha-326307: state=Stopped err=<nil>
	W0919 22:40:06.592678  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:06.594861  102947 out.go:252] * Restarting existing docker container for "ha-326307" ...
	I0919 22:40:06.594935  102947 cli_runner.go:164] Run: docker start ha-326307
	I0919 22:40:06.849585  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:06.870075  102947 kic.go:430] container "ha-326307" state is running.
	I0919 22:40:06.870543  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:40:06.891652  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:06.891897  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:06.891960  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:06.913541  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:06.913830  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32819 <nil> <nil>}
	I0919 22:40:06.913845  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:06.914579  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60650->127.0.0.1:32819: read: connection reset by peer
	I0919 22:40:10.057342  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:40:10.057370  102947 ubuntu.go:182] provisioning hostname "ha-326307"
	I0919 22:40:10.057448  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.076664  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:10.076914  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32819 <nil> <nil>}
	I0919 22:40:10.076932  102947 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307 && echo "ha-326307" | sudo tee /etc/hostname
	I0919 22:40:10.228297  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:40:10.228362  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.247319  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:10.247573  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32819 <nil> <nil>}
	I0919 22:40:10.247594  102947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:40:10.386261  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:40:10.386297  102947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:40:10.386346  102947 ubuntu.go:190] setting up certificates
	I0919 22:40:10.386360  102947 provision.go:84] configureAuth start
	I0919 22:40:10.386416  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:40:10.407761  102947 provision.go:143] copyHostCerts
	I0919 22:40:10.407810  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:10.407855  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:40:10.407875  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:10.407957  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:40:10.408069  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:10.408095  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:40:10.408103  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:10.408148  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:40:10.408242  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:10.408268  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:40:10.408278  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:10.408327  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:40:10.408399  102947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307 san=[127.0.0.1 192.168.49.2 ha-326307 localhost minikube]
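The line above reports the server certificate being generated with org=jenkins.ha-326307 and SANs [127.0.0.1 192.168.49.2 ha-326307 localhost minikube]. A minimal sketch of issuing a certificate carrying those SANs with the standard library; it is self-signed here for brevity, whereas minikube signs with its own CA key, and all values are copied from the log line above:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// SANs taken from the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-326307"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:     []string{"ha-326307", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for the sketch: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
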
	I0919 22:40:10.713645  102947 provision.go:177] copyRemoteCerts
	I0919 22:40:10.713742  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:40:10.713785  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.733589  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:10.833003  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:40:10.833079  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:40:10.860656  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:40:10.860740  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:40:10.888926  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:40:10.889032  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:40:10.916393  102947 provision.go:87] duration metric: took 530.019982ms to configureAuth
	I0919 22:40:10.916415  102947 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:40:10.916623  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:10.916638  102947 machine.go:96] duration metric: took 4.024727048s to provisionDockerMachine
	I0919 22:40:10.916646  102947 start.go:293] postStartSetup for "ha-326307" (driver="docker")
	I0919 22:40:10.916656  102947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:40:10.916705  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:40:10.916774  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:10.935896  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.036597  102947 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:40:11.040388  102947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:40:11.040431  102947 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:40:11.040440  102947 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:40:11.040446  102947 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:40:11.040457  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:40:11.040518  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:40:11.040597  102947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:40:11.040608  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:40:11.040710  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:40:11.050512  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:11.077986  102947 start.go:296] duration metric: took 161.32783ms for postStartSetup
	I0919 22:40:11.078088  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:40:11.078139  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:11.099514  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.193605  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:40:11.198421  102947 fix.go:56] duration metric: took 4.625199971s for fixHost
	I0919 22:40:11.198447  102947 start.go:83] releasing machines lock for "ha-326307", held for 4.625246732s
	I0919 22:40:11.198524  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:40:11.217572  102947 ssh_runner.go:195] Run: cat /version.json
	I0919 22:40:11.217596  102947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:40:11.217615  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:11.217666  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:11.238048  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.238195  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:11.415017  102947 ssh_runner.go:195] Run: systemctl --version
	I0919 22:40:11.420537  102947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:40:11.425907  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:40:11.447016  102947 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:40:11.447107  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:40:11.457668  102947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:40:11.457703  102947 start.go:495] detecting cgroup driver to use...
	I0919 22:40:11.457740  102947 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:40:11.457803  102947 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:40:11.473712  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:40:11.486915  102947 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:40:11.486970  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:40:11.501818  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:40:11.514985  102947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:40:11.582004  102947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:40:11.651320  102947 docker.go:234] disabling docker service ...
	I0919 22:40:11.651379  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:40:11.665822  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:40:11.678416  102947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:40:11.746878  102947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:40:11.815384  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:40:11.828348  102947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:40:11.847640  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:40:11.859649  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:40:11.871696  102947 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:40:11.871768  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:40:11.883197  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:11.894832  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:40:11.906582  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:11.918458  102947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:40:11.929108  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:40:11.940521  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:40:11.952577  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:40:11.963963  102947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:40:11.974367  102947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:40:11.985259  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:12.050391  102947 ssh_runner.go:195] Run: sudo systemctl restart containerd
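The sed commands above rewrite /etc/containerd/config.toml in place (sandbox image, SystemdCgroup = true, runc v2 runtime) before containerd is restarted. A minimal sketch of the SystemdCgroup toggle only, done with a Go regexp instead of sed; setSystemdCgroup is an illustrative helper, not the code these logs come from:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup flips `SystemdCgroup = ...` to true in a containerd
// config.toml, the same edit the sed command in the log performs.
func setSystemdCgroup(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setSystemdCgroup("/etc/containerd/config.toml"); err != nil {
		fmt.Println("edit failed:", err)
	}
}
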
	I0919 22:40:12.169871  102947 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:40:12.169947  102947 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:40:12.174079  102947 start.go:563] Will wait 60s for crictl version
	I0919 22:40:12.174139  102947 ssh_runner.go:195] Run: which crictl
	I0919 22:40:12.177946  102947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:40:12.213111  102947 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
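After restarting containerd, the start code waits up to 60s for the socket path and then for a usable crictl version, as the lines above show. A minimal polling sketch of that kind of readiness wait; waitForSocket is an illustrative helper name, not minikube's:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists or the timeout elapses,
// mirroring the "Will wait 60s for socket path" step in the logs.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("containerd socket is ready")
}
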
	I0919 22:40:12.213183  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:12.237742  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:12.267221  102947 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:40:12.268667  102947 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:40:12.287123  102947 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:40:12.291375  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:12.304417  102947 kubeadm.go:875] updating cluster {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:40:12.304576  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:12.304623  102947 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:40:12.341103  102947 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:40:12.341184  102947 containerd.go:534] Images already preloaded, skipping extraction
	I0919 22:40:12.341271  102947 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:40:12.378884  102947 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:40:12.378907  102947 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:40:12.378916  102947 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0919 22:40:12.379030  102947 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:40:12.379093  102947 ssh_runner.go:195] Run: sudo crictl info
	I0919 22:40:12.415076  102947 cni.go:84] Creating CNI manager for ""
	I0919 22:40:12.415100  102947 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:40:12.415111  102947 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:40:12.415129  102947 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326307 NodeName:ha-326307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:40:12.415290  102947 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-326307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
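The kubeadm config dumped above embeds a KubeletConfiguration whose cgroupDriver is systemd (matching the driver detected on the host earlier in this log) and whose CRI endpoint is the containerd socket. A minimal sketch that parses just those fields, assuming gopkg.in/yaml.v3 as a dependency; the struct below models only the fields checked here, not the full KubeletConfiguration:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3" // assumed dependency for this sketch
)

type kubeletConfig struct {
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
}

func main() {
	doc := `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
`
	var kc kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
		panic(err)
	}
	// The cgroup driver must match what containerd was configured with
	// (SystemdCgroup = true earlier in this log), or the kubelet will not start cleanly.
	fmt.Println(kc.Kind, kc.CgroupDriver, kc.ContainerRuntimeEndpoint)
}
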
	
	I0919 22:40:12.415312  102947 kube-vip.go:115] generating kube-vip config ...
	I0919 22:40:12.415360  102947 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:40:12.428658  102947 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:12.428770  102947 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
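
kube-vip is deployed as a static pod on the control-plane node; the manifest above pins the virtual IP 192.168.49.254 on eth0 and enables leader election through the plndr-cp-lock lease. A way to confirm what actually landed on the node, assuming the ha-326307 profile is running (commands are illustrative, not part of this run):

    # Show the static pod manifest the kubelet picks up from staticPodPath
    minikube -p ha-326307 ssh -- sudo cat /etc/kubernetes/manifests/kube-vip.yaml

    # Once kube-vip holds the lease, the VIP should be bound to eth0
    minikube -p ha-326307 ssh -- ip addr show dev eth0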
	I0919 22:40:12.428823  102947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:40:12.438647  102947 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:40:12.438722  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:40:12.448707  102947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 22:40:12.468517  102947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:40:12.488929  102947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0919 22:40:12.510232  102947 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:40:12.530559  102947 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:40:12.534624  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
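
The two commands above ensure control-plane.minikube.internal resolves to the kube-vip address inside the node: the grep looks for an existing entry, and the bash one-liner rewrites /etc/hosts with "192.168.49.254 control-plane.minikube.internal" appended, matching the controlPlaneEndpoint in the kubeadm config. An equivalent check from inside the node (illustrative):

    # Expect "192.168.49.254 control-plane.minikube.internal" after the rewrite
    getent hosts control-plane.minikube.internal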
	I0919 22:40:12.548237  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:12.611595  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:12.634054  102947 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.2
	I0919 22:40:12.634076  102947 certs.go:194] generating shared ca certs ...
	I0919 22:40:12.634091  102947 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:12.634256  102947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:40:12.634323  102947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:40:12.634335  102947 certs.go:256] generating profile certs ...
	I0919 22:40:12.634435  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:40:12.634462  102947 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704
	I0919 22:40:12.634473  102947 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:40:12.848520  102947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704 ...
	I0919 22:40:12.848550  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704: {Name:mkec91c90022534b703be5f6d2ae62638fdba9da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:12.848737  102947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704 ...
	I0919 22:40:12.848755  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704: {Name:mka1bfb464462bf578809e209441ee38ad333adf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:12.848871  102947 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.12c02704 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:40:12.849067  102947 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.12c02704 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
	I0919 22:40:12.849277  102947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:40:12.849295  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:40:12.849315  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:40:12.849337  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:40:12.849355  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:40:12.849373  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:40:12.849392  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:40:12.849410  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:40:12.849430  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:40:12.849610  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:40:12.849684  102947 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:40:12.849700  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:40:12.849733  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:40:12.849775  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:40:12.849812  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:40:12.849872  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:12.849915  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:40:12.849936  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:40:12.849955  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:12.850570  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:40:12.881412  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:40:12.909365  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:40:12.936570  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:40:12.963699  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:40:12.991460  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:40:13.019268  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:40:13.046670  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:40:13.074069  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:40:13.101424  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:40:13.128690  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:40:13.156653  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:40:13.179067  102947 ssh_runner.go:195] Run: openssl version
	I0919 22:40:13.187620  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:40:13.203083  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:40:13.209838  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:40:13.209911  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:40:13.220919  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:40:13.238903  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:40:13.253729  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:40:13.261626  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:40:13.261780  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:40:13.272880  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:40:13.287661  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:40:13.303848  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:13.308762  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:13.308833  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:13.319788  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
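
The openssl x509 -hash calls above compute the subject hash that OpenSSL uses for on-disk CA lookups, and the follow-up ln -fs commands create the matching <hash>.0 symlinks in /etc/ssl/certs (b5213941.0 for minikubeCA.pem in this run). The same check can be reproduced by hand on the node:

    # Prints the subject hash, e.g. b5213941 for the minikube CA here
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem

    # OpenSSL expects the CA to be reachable as <hash>.0 in the certs directory
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0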
	I0919 22:40:13.336323  102947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:40:13.343266  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:40:13.355799  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:40:13.367939  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:40:13.378087  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:40:13.388839  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:40:13.399528  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
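
Each -checkend 86400 invocation above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it survives that window, non-zero means it is expired or about to expire, which is what drives the decision to regenerate. For example:

    # Exit 0 -> valid for at least another 24h; non-zero -> needs regeneration
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "apiserver.crt valid for the next 24h"
    else
        echo "apiserver.crt expiring or expired"
    fi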
	I0919 22:40:13.412341  102947 kubeadm.go:392] StartCluster: {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:40:13.412499  102947 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 22:40:13.412584  102947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:40:13.476121  102947 cri.go:89] found id: "83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad"
	I0919 22:40:13.476178  102947 cri.go:89] found id: "63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284"
	I0919 22:40:13.476184  102947 cri.go:89] found id: "7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c"
	I0919 22:40:13.476189  102947 cri.go:89] found id: "c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6"
	I0919 22:40:13.476197  102947 cri.go:89] found id: "e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5"
	I0919 22:40:13.476204  102947 cri.go:89] found id: "d1c181558b58c3ebc7dae5df97b80119a8c6d18dc19b0532b61999c4db0c6668"
	I0919 22:40:13.476209  102947 cri.go:89] found id: "ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93"
	I0919 22:40:13.476214  102947 cri.go:89] found id: "1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6"
	I0919 22:40:13.476221  102947 cri.go:89] found id: "f52d2d9f5881b5d50f95a3aeef2c876d51dd6b2be6c464a7464b26e8175b0fc6"
	I0919 22:40:13.476232  102947 cri.go:89] found id: "365cc00c2e009eeed7e71d1202a4d406c12f0d9faee38762ba691eb2d7c71f89"
	I0919 22:40:13.476255  102947 cri.go:89] found id: "bd9e41958ffbbb27dab3d180a56fb27df36a1a1896db3a6f322a8aabcda57677"
	I0919 22:40:13.476262  102947 cri.go:89] found id: "456a0c3cbf5ce028b9cbac658728c1fee13ad8e2659bfa0c625cd685d711c708"
	I0919 22:40:13.476267  102947 cri.go:89] found id: "05ab0247624a7f8ffa6bc948e3abc3adc49911c297291eb0a6dd42e3df39f4cd"
	I0919 22:40:13.476272  102947 cri.go:89] found id: "e5c59a6abe97751de42afd27010936d1c3a401fad6cd730e75a1692a895b4fbc"
	I0919 22:40:13.476278  102947 cri.go:89] found id: "e80d65e3c7c18da87f7fa003e39382f5a4285ba4782fc295197421c6b882a161"
	I0919 22:40:13.476285  102947 cri.go:89] found id: ""
	I0919 22:40:13.476358  102947 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0919 22:40:13.511540  102947 cri.go:116] JSON = [{"ociVersion":"1.2.0","id":"35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","pid":903,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92/rootfs","created":"2025-09-19T22:40:13.265497632Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-ha-326307_57c850ed4c5abebc96f109c9dc04f98c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-ha-3263
07","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"57c850ed4c5abebc96f109c9dc04f98c"},"owner":"root"},{"ociVersion":"1.2.0","id":"4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","pid":851,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f/rootfs","created":"2025-09-19T22:40:13.237289545Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-ha-326307_f6c96a149704fe94a8f3f9671ba1a8ff","io.kubernetes.cri.sandbox-memor
y":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f6c96a149704fe94a8f3f9671ba1a8ff"},"owner":"root"},{"ociVersion":"1.2.0","id":"63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284","pid":1109,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284/rootfs","created":"2025-09-19T22:40:13.452193435Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri.sandbox-id":"b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","io.kubernetes.cri.sandbox-name":"kube-scheduler-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-s
ystem","io.kubernetes.cri.sandbox-uid":"02be84f36b44ed11e0db130395870414"},"owner":"root"},{"ociVersion":"1.2.0","id":"7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c","pid":1081,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c/rootfs","created":"2025-09-19T22:40:13.445726517Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri.sandbox-id":"35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92","io.kubernetes.cri.sandbox-name":"kube-controller-manager-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"57c850ed4c5abebc96f109c9dc04f98c"},"owner":"r
oot"},{"ociVersion":"1.2.0","id":"8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","pid":926,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d/rootfs","created":"2025-09-19T22:40:13.291697374Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-vip-ha-326307_11fc7e0ddcb5f54efe3aa73e9d205abc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-vip-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-ui
d":"11fc7e0ddcb5f54efe3aa73e9d205abc"},"owner":"root"},{"ociVersion":"1.2.0","id":"83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad","pid":1117,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad/rootfs","created":"2025-09-19T22:40:13.459929825Z","annotations":{"io.kubernetes.cri.container-name":"kube-vip","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri.sandbox-id":"8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d","io.kubernetes.cri.sandbox-name":"kube-vip-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"11fc7e0ddcb5f54efe3aa73e9d205abc"},"owner":"root"},{"ociVersion":"1.2.0","id":"a85600718119d4e698fdb29d033016634be8e463e4c29a4
b1f9b6778b83c3910","pid":850,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910/rootfs","created":"2025-09-19T22:40:13.246511214Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-ha-326307_044bbdcbe96821df073716c7f05fb17d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"044bbdcbe96821df073716c7f05fb17d"},"owner":"root"},{"ociVersion":"1.2.0","id":"b84e
223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","pid":911,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248/rootfs","created":"2025-09-19T22:40:13.280883406Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-ha-326307_02be84f36b44ed11e0db130395870414","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"02be84f36b44ed11e0db
130395870414"},"owner":"root"},{"ociVersion":"1.2.0","id":"c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6","pid":1090,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6/rootfs","created":"2025-09-19T22:40:13.443035858Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910","io.kubernetes.cri.sandbox-name":"etcd-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"044bbdcbe96821df073716c7f05fb17d"},"owner":"root"},{"ociVersion":"1.2.0","id":"e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5","pid":1007,"statu
s":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5/rootfs","created":"2025-09-19T22:40:13.41525993Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri.sandbox-id":"4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f","io.kubernetes.cri.sandbox-name":"kube-apiserver-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f6c96a149704fe94a8f3f9671ba1a8ff"},"owner":"root"}]
	I0919 22:40:13.511763  102947 cri.go:126] list returned 10 containers
	I0919 22:40:13.511789  102947 cri.go:129] container: {ID:35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92 Status:running}
	I0919 22:40:13.511829  102947 cri.go:131] skipping 35b9028490f7623e95322d9be2b2f2e164459bdea8b430e7e5cfa9e52b3c5d92 - not in ps
	I0919 22:40:13.511840  102947 cri.go:129] container: {ID:4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f Status:running}
	I0919 22:40:13.511848  102947 cri.go:131] skipping 4ff7be1cea5766e07821990bb234776471460b3c71f40a9e6769aec8dc87ce1f - not in ps
	I0919 22:40:13.511854  102947 cri.go:129] container: {ID:63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284 Status:running}
	I0919 22:40:13.511864  102947 cri.go:135] skipping {63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284 running}: state = "running", want "paused"
	I0919 22:40:13.511877  102947 cri.go:129] container: {ID:7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c Status:running}
	I0919 22:40:13.511890  102947 cri.go:135] skipping {7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c running}: state = "running", want "paused"
	I0919 22:40:13.511898  102947 cri.go:129] container: {ID:8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d Status:running}
	I0919 22:40:13.511910  102947 cri.go:131] skipping 8124d18d08f1c0cedf731af0f54fa6a88197b5dae7d35fd782fd968859eae78d - not in ps
	I0919 22:40:13.511916  102947 cri.go:129] container: {ID:83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad Status:running}
	I0919 22:40:13.511925  102947 cri.go:135] skipping {83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad running}: state = "running", want "paused"
	I0919 22:40:13.511935  102947 cri.go:129] container: {ID:a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910 Status:running}
	I0919 22:40:13.511941  102947 cri.go:131] skipping a85600718119d4e698fdb29d033016634be8e463e4c29a4b1f9b6778b83c3910 - not in ps
	I0919 22:40:13.511946  102947 cri.go:129] container: {ID:b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248 Status:running}
	I0919 22:40:13.511951  102947 cri.go:131] skipping b84e223a297e40de82eca900264a3bca33e38f2f97cff1801c215ecfb604c248 - not in ps
	I0919 22:40:13.511957  102947 cri.go:129] container: {ID:c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6 Status:running}
	I0919 22:40:13.511969  102947 cri.go:135] skipping {c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6 running}: state = "running", want "paused"
	I0919 22:40:13.511976  102947 cri.go:129] container: {ID:e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5 Status:running}
	I0919 22:40:13.511988  102947 cri.go:135] skipping {e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5 running}: state = "running", want "paused"
	I0919 22:40:13.512041  102947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:40:13.524546  102947 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:40:13.524567  102947 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:40:13.524627  102947 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:40:13.537544  102947 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:13.538084  102947 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-326307" does not appear in /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:40:13.538273  102947 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14678/kubeconfig needs updating (will repair): [kubeconfig missing "ha-326307" cluster setting kubeconfig missing "ha-326307" context setting]
	I0919 22:40:13.538666  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:13.539452  102947 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:40:13.540084  102947 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:40:13.540104  102947 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:40:13.540111  102947 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:40:13.540118  102947 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:40:13.540125  102947 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:40:13.540609  102947 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:40:13.540743  102947 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:40:13.555466  102947 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:40:13.555575  102947 kubeadm.go:593] duration metric: took 31.000137ms to restartPrimaryControlPlane
	I0919 22:40:13.555603  102947 kubeadm.go:394] duration metric: took 143.274252ms to StartCluster
	I0919 22:40:13.555651  102947 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:13.555800  102947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:40:13.556731  102947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:13.557204  102947 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:40:13.557402  102947 start.go:241] waiting for startup goroutines ...
	I0919 22:40:13.557267  102947 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:40:13.557510  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:13.561726  102947 out.go:179] * Enabled addons: 
	I0919 22:40:13.563479  102947 addons.go:514] duration metric: took 6.21303ms for enable addons: enabled=[]
	I0919 22:40:13.563535  102947 start.go:246] waiting for cluster config update ...
	I0919 22:40:13.563548  102947 start.go:255] writing updated cluster config ...
	I0919 22:40:13.565943  102947 out.go:203] 
	I0919 22:40:13.568105  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:13.568246  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:13.570538  102947 out.go:179] * Starting "ha-326307-m02" control-plane node in "ha-326307" cluster
	I0919 22:40:13.572566  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:13.574955  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:13.576797  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:13.576835  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:13.576935  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:13.576982  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:13.576999  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:13.577147  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:13.603282  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:13.603304  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:13.603323  102947 cache.go:232] Successfully downloaded all kic artifacts
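
The image check above only asks the local Docker daemon whether the pinned kicbase digest is already present; because it is, both the pull and the load are skipped. A roughly equivalent manual check (illustrative, not the exact command minikube runs):

    # Non-zero exit would mean the base image still has to be pulled from gcr.io
    docker image inspect \
      gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 \
      --format '{{.Id}}'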
	I0919 22:40:13.603356  102947 start.go:360] acquireMachinesLock for ha-326307-m02: {Name:mk4919a9b19250804b0f53d01bcd11efaf9a431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:13.603419  102947 start.go:364] duration metric: took 47.152µs to acquireMachinesLock for "ha-326307-m02"
	I0919 22:40:13.603445  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:13.603459  102947 fix.go:54] fixHost starting: m02
	I0919 22:40:13.603697  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:40:13.626324  102947 fix.go:112] recreateIfNeeded on ha-326307-m02: state=Stopped err=<nil>
	W0919 22:40:13.626352  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:13.629640  102947 out.go:252] * Restarting existing docker container for "ha-326307-m02" ...
	I0919 22:40:13.629728  102947 cli_runner.go:164] Run: docker start ha-326307-m02
	I0919 22:40:13.926841  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:40:13.950131  102947 kic.go:430] container "ha-326307-m02" state is running.
	I0919 22:40:13.950515  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:40:13.973194  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:13.973503  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:13.973577  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:13.996029  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:13.996469  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32824 <nil> <nil>}
	I0919 22:40:13.996495  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:13.997409  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55282->127.0.0.1:32824: read: connection reset by peer
	I0919 22:40:17.135269  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:40:17.135298  102947 ubuntu.go:182] provisioning hostname "ha-326307-m02"
	I0919 22:40:17.135359  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.155772  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:17.156086  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32824 <nil> <nil>}
	I0919 22:40:17.156103  102947 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m02 && echo "ha-326307-m02" | sudo tee /etc/hostname
	I0919 22:40:17.308282  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:40:17.308354  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.329394  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:17.329602  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32824 <nil> <nil>}
	I0919 22:40:17.329620  102947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:40:17.469105  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:40:17.469136  102947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:40:17.469173  102947 ubuntu.go:190] setting up certificates
	I0919 22:40:17.469188  102947 provision.go:84] configureAuth start
	I0919 22:40:17.469243  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:40:17.489456  102947 provision.go:143] copyHostCerts
	I0919 22:40:17.489512  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:17.489551  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:40:17.489560  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:17.489629  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:40:17.489711  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:17.489728  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:40:17.489735  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:17.489771  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:40:17.489846  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:17.489864  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:40:17.489870  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:17.489896  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:40:17.489952  102947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m02 san=[127.0.0.1 192.168.49.3 ha-326307-m02 localhost minikube]
	I0919 22:40:17.687121  102947 provision.go:177] copyRemoteCerts
	I0919 22:40:17.687196  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:40:17.687230  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.706618  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:17.805482  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:40:17.805552  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:40:17.834469  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:40:17.834533  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:40:17.862491  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:40:17.862578  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
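
The machine server certificate generated above for ha-326307-m02 is issued against the SAN list [127.0.0.1 192.168.49.3 ha-326307-m02 localhost minikube], so the provisioned endpoint can be reached by IP or hostname. The SANs on the copied cert can be inspected directly on the node (illustrative):

    # Lists the DNS and IP Subject Alternative Names baked into the machine cert
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'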
	I0919 22:40:17.891048  102947 provision.go:87] duration metric: took 421.847088ms to configureAuth
	I0919 22:40:17.891077  102947 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:40:17.891323  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:17.891337  102947 machine.go:96] duration metric: took 3.917817402s to provisionDockerMachine
	I0919 22:40:17.891348  102947 start.go:293] postStartSetup for "ha-326307-m02" (driver="docker")
	I0919 22:40:17.891362  102947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:40:17.891426  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:40:17.891475  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:17.911877  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.017574  102947 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:40:18.021564  102947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:40:18.021608  102947 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:40:18.021620  102947 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:40:18.021627  102947 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:40:18.021641  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:40:18.021732  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:40:18.021827  102947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:40:18.021845  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:40:18.021965  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:40:18.037625  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:18.072355  102947 start.go:296] duration metric: took 180.992211ms for postStartSetup
	I0919 22:40:18.072434  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:40:18.072488  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:18.097080  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.200976  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:40:18.207724  102947 fix.go:56] duration metric: took 4.604261714s for fixHost
	I0919 22:40:18.207752  102947 start.go:83] releasing machines lock for "ha-326307-m02", held for 4.604318809s
	I0919 22:40:18.207819  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:40:18.233683  102947 out.go:179] * Found network options:
	I0919 22:40:18.235326  102947 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:40:18.236979  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:18.237024  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:40:18.237101  102947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:40:18.237148  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:18.237186  102947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:40:18.237248  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:40:18.262883  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.265825  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32824 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:40:18.472261  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:40:18.501316  102947 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:40:18.501403  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:40:18.517881  102947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
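
The find/sed pipeline above patches any loopback CNI config so it carries an explicit "name" field and a 1.0.0 cniVersion, and would rename bridge/podman configs to *.mk_disabled (none were present here). Under those assumptions, the patched file would look roughly like this (illustrative content, not captured from this run):

    cat /etc/cni/net.d/*loopback.conf*
    # {
    #   "cniVersion": "1.0.0",
    #   "name": "loopback",
    #   "type": "loopback"
    # }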
	I0919 22:40:18.517907  102947 start.go:495] detecting cgroup driver to use...
	I0919 22:40:18.517943  102947 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:40:18.518009  102947 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:40:18.540215  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:40:18.558468  102947 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:40:18.558538  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:40:18.578938  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:40:18.606098  102947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:40:18.738984  102947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:40:18.861135  102947 docker.go:234] disabling docker service ...
	I0919 22:40:18.861295  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:40:18.889797  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:40:18.903559  102947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:40:19.020834  102947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:40:19.210102  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:40:19.253298  102947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:40:19.294451  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:40:19.314809  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:40:19.329896  102947 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:40:19.329968  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:40:19.344499  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:19.359934  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:40:19.375426  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:19.390525  102947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:40:19.405742  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:40:19.419676  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:40:19.433744  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
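Taken together, the sed passes above rewrite /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup = true matches the systemd cgroup driver detected at 22:40:18.517943, the legacy io.containerd.runtime.v1.linux and runc.v1 shims are mapped to runc.v2, conf_dir is pointed at /etc/cni/net.d, and enable_unprivileged_ports = true is added under the CRI plugin. A minimal post-restart check:

    sudo grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml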
	I0919 22:40:19.447497  102947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:40:19.459701  102947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:40:19.472280  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:19.590393  102947 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:40:19.844194  102947 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:40:19.844268  102947 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:40:19.848691  102947 start.go:563] Will wait 60s for crictl version
	I0919 22:40:19.848750  102947 ssh_runner.go:195] Run: which crictl
	I0919 22:40:19.852912  102947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:40:19.896612  102947 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:40:19.896665  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:19.922108  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:19.951040  102947 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:40:19.952600  102947 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:40:19.954094  102947 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:40:19.972221  102947 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:40:19.976367  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
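The grep/cp pair above is the idempotent way this tooling pins a hosts entry: strip any existing host.minikube.internal line, append the fresh one, then copy the temp file back over /etc/hosts. Afterwards the node should resolve the host gateway:

    $ grep host.minikube.internal /etc/hosts
    192.168.49.1	host.minikube.internal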
	I0919 22:40:19.988586  102947 mustload.go:65] Loading cluster: ha-326307
	I0919 22:40:19.988826  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:19.989048  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:20.009691  102947 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:40:20.009938  102947 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.3
	I0919 22:40:20.009958  102947 certs.go:194] generating shared ca certs ...
	I0919 22:40:20.009977  102947 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:20.010097  102947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:40:20.010186  102947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:40:20.010200  102947 certs.go:256] generating profile certs ...
	I0919 22:40:20.010274  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:40:20.010317  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.d9fee4c2
	I0919 22:40:20.010351  102947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:40:20.010361  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:40:20.010388  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:40:20.010403  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:40:20.010415  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:40:20.010427  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:40:20.010440  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:40:20.010451  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:40:20.010463  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:40:20.010507  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:40:20.010541  102947 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:40:20.010552  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:40:20.010572  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:40:20.010593  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:40:20.010613  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:40:20.010656  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:20.010681  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:40:20.010696  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:20.010706  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:40:20.010750  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:20.034999  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:20.130696  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:40:20.137701  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:40:20.181406  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:40:20.188123  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:40:20.209898  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:40:20.217560  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:40:20.265391  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:40:20.271849  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:40:20.306378  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:40:20.313419  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:40:20.338279  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:40:20.344910  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:40:20.368606  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:40:20.417189  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:40:20.473868  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:40:20.554542  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:40:20.629092  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:40:20.678888  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:40:20.722550  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:40:20.778639  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:40:20.828112  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:40:20.884904  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:40:20.936206  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:40:20.979746  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:40:21.011968  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:40:21.037922  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:40:21.058425  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:40:21.078533  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:40:21.099029  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:40:21.125522  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
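At this point the shared CAs, the ha-326307 profile certs, the service-account keypair, the front-proxy CA, and the etcd CA have all been pushed onto the m02 node under /var/lib/minikube/certs, with the trust bundle under /usr/share/ca-certificates. A sketch for eyeballing the result from the host, assuming the node name as reported by `minikube node list`:

    minikube -p ha-326307 ssh -n ha-326307-m02 -- sudo ls -l /var/lib/minikube/certs /var/lib/minikube/certs/etcd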
	I0919 22:40:21.151265  102947 ssh_runner.go:195] Run: openssl version
	I0919 22:40:21.157938  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:40:21.169944  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:40:21.174243  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:40:21.174339  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:40:21.182194  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:40:21.195623  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:40:21.210343  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:21.216012  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:21.216080  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:21.226359  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:40:21.239970  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:40:21.256305  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:40:21.263490  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:40:21.263550  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:40:21.274306  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
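Each openssl x509 -hash call above computes the OpenSSL subject-hash for a CA, and the matching ln -fs creates the /etc/ssl/certs/<hash>.0 symlink that the system trust lookup expects (3ec20f2e.0 for 182102.pem, b5213941.0 for minikubeCA.pem, 51391683.0 for 18210.pem), so these CAs are trusted without rerunning update-ca-certificates. The same step by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"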
	I0919 22:40:21.289549  102947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:40:21.294844  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:40:21.305190  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:40:21.317466  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:40:21.327473  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:40:21.337404  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:40:21.346840  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
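The -checkend 86400 probes above ask openssl whether each control-plane certificate will still be valid 24 hours from now; exit status 0 means it will, which is why no cert regeneration is triggered on this restart. Standalone form:

    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
      && echo "valid for at least 24h" || echo "expires within 24h"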
	I0919 22:40:21.355241  102947 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0919 22:40:21.355365  102947 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
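The drop-in above pins this node's kubelet identity: --hostname-override=ha-326307-m02 and --node-ip=192.168.49.3, with the bootstrap and final kubeconfigs under /etc/kubernetes. Once the unit files are copied a few lines below, the effective unit plus drop-in can be reviewed with:

    systemctl cat kubelet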
	I0919 22:40:21.355400  102947 kube-vip.go:115] generating kube-vip config ...
	I0919 22:40:21.355447  102947 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:40:21.372568  102947 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:21.372652  102947 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
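This manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml just below, so kubelet runs kube-vip as a static pod. With cp_enable and vip_leaderelection set, the lease holder claims the HA virtual IP 192.168.49.254 on eth0 and answers ARP for it; and because the ip_vs modules were not found, control-plane load-balancing is skipped and the VIP only fails over between nodes. A spot-check on whichever control-plane node currently holds the plndr-cp-lock lease:

    ip addr show eth0 | grep 192.168.49.254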
	I0919 22:40:21.372715  102947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:40:21.385812  102947 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:40:21.385902  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:40:21.396920  102947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:40:21.418422  102947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:40:21.441221  102947 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:40:21.461293  102947 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:40:21.465499  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:21.479394  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:21.609276  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:21.625324  102947 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:40:21.625678  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:21.627937  102947 out.go:179] * Verifying Kubernetes components...
	I0919 22:40:21.629432  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:21.754519  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:21.770966  102947 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:40:21.771034  102947 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:40:21.771308  102947 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m02" to be "Ready" ...
	I0919 22:40:21.780317  102947 node_ready.go:49] node "ha-326307-m02" is "Ready"
	I0919 22:40:21.780344  102947 node_ready.go:38] duration metric: took 9.008043ms for node "ha-326307-m02" to be "Ready" ...
	I0919 22:40:21.780357  102947 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:40:21.780412  102947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:40:21.794097  102947 api_server.go:72] duration metric: took 168.727042ms to wait for apiserver process to appear ...
	I0919 22:40:21.794124  102947 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:40:21.794147  102947 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:40:21.800333  102947 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
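The healthz probe is an authenticated GET against the first control-plane endpoint (note the line above: the stale VIP host was overridden with https://192.168.49.2:8443). The same check can be reproduced from the host via the profile's kubeconfig context:

    kubectl --context ha-326307 get --raw /healthz
    # expected: ok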
	I0919 22:40:21.801474  102947 api_server.go:141] control plane version: v1.34.0
	I0919 22:40:21.801509  102947 api_server.go:131] duration metric: took 7.377354ms to wait for apiserver health ...
	I0919 22:40:21.801520  102947 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:40:21.810182  102947 system_pods.go:59] 24 kube-system pods found
	I0919 22:40:21.810226  102947 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.810244  102947 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.810254  102947 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.810262  102947 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.810268  102947 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running
	I0919 22:40:21.810276  102947 system_pods.go:61] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:21.810281  102947 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:21.810292  102947 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:21.810300  102947 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.810311  102947 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.810315  102947 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running
	I0919 22:40:21.810325  102947 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.810332  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.810336  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running
	I0919 22:40:21.810340  102947 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:21.810344  102947 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:21.810348  102947 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:21.810353  102947 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.810361  102947 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.810365  102947 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running
	I0919 22:40:21.810369  102947 system_pods.go:61] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:21.810372  102947 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:21.810375  102947 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:21.810378  102947 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:21.810383  102947 system_pods.go:74] duration metric: took 8.856915ms to wait for pod list to return data ...
	I0919 22:40:21.810390  102947 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:40:21.813818  102947 default_sa.go:45] found service account: "default"
	I0919 22:40:21.813853  102947 default_sa.go:55] duration metric: took 3.458375ms for default service account to be created ...
	I0919 22:40:21.813864  102947 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:40:21.820987  102947 system_pods.go:86] 24 kube-system pods found
	I0919 22:40:21.821019  102947 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.821027  102947 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:21.821034  102947 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.821040  102947 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:21.821044  102947 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running
	I0919 22:40:21.821048  102947 system_pods.go:89] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:21.821051  102947 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:21.821054  102947 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:21.821059  102947 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.821064  102947 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:21.821068  102947 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running
	I0919 22:40:21.821074  102947 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.821079  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:21.821083  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running
	I0919 22:40:21.821087  102947 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:21.821090  102947 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:21.821095  102947 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:21.821100  102947 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.821107  102947 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:21.821114  102947 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running
	I0919 22:40:21.821118  102947 system_pods.go:89] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:21.821121  102947 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:21.821124  102947 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:21.821127  102947 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:21.821133  102947 system_pods.go:126] duration metric: took 7.263023ms to wait for k8s-apps to be running ...
	I0919 22:40:21.821142  102947 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:40:21.821209  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:40:21.835069  102947 system_svc.go:56] duration metric: took 13.918083ms WaitForService to wait for kubelet
	I0919 22:40:21.835096  102947 kubeadm.go:578] duration metric: took 209.729975ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:40:21.835114  102947 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:40:21.839112  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:21.839140  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:21.839183  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:21.839191  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:21.839198  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:21.839203  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:21.839208  102947 node_conditions.go:105] duration metric: took 4.090003ms to run NodePressure ...
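The NodePressure pass just reads each node's reported capacity; all three nodes here show 304681132Ki of ephemeral storage and 8 CPUs. Roughly the same view from the client side:

    kubectl --context ha-326307 describe nodes | grep -A 6 'Capacity:'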
	I0919 22:40:21.839223  102947 start.go:241] waiting for startup goroutines ...
	I0919 22:40:21.839260  102947 start.go:255] writing updated cluster config ...
	I0919 22:40:21.841908  102947 out.go:203] 
	I0919 22:40:21.843889  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:21.844011  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:21.846125  102947 out.go:179] * Starting "ha-326307-m03" control-plane node in "ha-326307" cluster
	I0919 22:40:21.848304  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:21.850127  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:21.851602  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:21.851635  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:21.851746  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:21.851778  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:21.851789  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:21.851912  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:21.876321  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:21.876341  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:21.876357  102947 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:40:21.876378  102947 start.go:360] acquireMachinesLock for ha-326307-m03: {Name:mk07818636650a6efffb19d787e84a34d6f1dd98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:21.876432  102947 start.go:364] duration metric: took 39.311µs to acquireMachinesLock for "ha-326307-m03"
	I0919 22:40:21.876450  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:21.876473  102947 fix.go:54] fixHost starting: m03
	I0919 22:40:21.876688  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:40:21.896238  102947 fix.go:112] recreateIfNeeded on ha-326307-m03: state=Stopped err=<nil>
	W0919 22:40:21.896264  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:21.898402  102947 out.go:252] * Restarting existing docker container for "ha-326307-m03" ...
	I0919 22:40:21.898493  102947 cli_runner.go:164] Run: docker start ha-326307-m03
	I0919 22:40:22.169027  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m03 --format={{.State.Status}}
	I0919 22:40:22.190097  102947 kic.go:430] container "ha-326307-m03" state is running.
	I0919 22:40:22.190500  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:40:22.212272  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:22.212572  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:22.212637  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:22.233877  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:22.234093  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I0919 22:40:22.234104  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:22.234859  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37302->127.0.0.1:32829: read: connection reset by peer
	I0919 22:40:25.378797  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:40:25.378831  102947 ubuntu.go:182] provisioning hostname "ha-326307-m03"
	I0919 22:40:25.378898  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:25.414501  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:25.414938  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I0919 22:40:25.415073  102947 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m03 && echo "ha-326307-m03" | sudo tee /etc/hostname
	I0919 22:40:25.588850  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m03
	
	I0919 22:40:25.588948  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:25.610247  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:25.610522  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I0919 22:40:25.610550  102947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:40:25.754732  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:40:25.754765  102947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:40:25.754794  102947 ubuntu.go:190] setting up certificates
	I0919 22:40:25.754806  102947 provision.go:84] configureAuth start
	I0919 22:40:25.754866  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:40:25.775758  102947 provision.go:143] copyHostCerts
	I0919 22:40:25.775814  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:25.775859  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:40:25.775876  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:40:25.775969  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:40:25.776130  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:25.776178  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:40:25.776185  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:40:25.776236  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:40:25.776312  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:25.776338  102947 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:40:25.776347  102947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:40:25.776387  102947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:40:25.776465  102947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m03 san=[127.0.0.1 192.168.49.4 ha-326307-m03 localhost minikube]
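configureAuth regenerates the docker-machine server certificate for m03 with the SANs listed above (127.0.0.1, 192.168.49.4, ha-326307-m03, localhost, minikube); it is then copied to /etc/docker/server.pem a few lines below. The SAN block can be inspected from the host once server.pem exists:

    openssl x509 -noout -text -in /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'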
	I0919 22:40:25.957556  102947 provision.go:177] copyRemoteCerts
	I0919 22:40:25.957614  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:40:25.957661  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:25.977125  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.075851  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:40:26.075925  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:40:26.103453  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:40:26.103525  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:40:26.130922  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:40:26.130993  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:40:26.158446  102947 provision.go:87] duration metric: took 403.627341ms to configureAuth
	I0919 22:40:26.158474  102947 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:40:26.158684  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:26.158696  102947 machine.go:96] duration metric: took 3.94610996s to provisionDockerMachine
	I0919 22:40:26.158706  102947 start.go:293] postStartSetup for "ha-326307-m03" (driver="docker")
	I0919 22:40:26.158718  102947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:40:26.158769  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:40:26.158815  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.177219  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.277051  102947 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:40:26.280902  102947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:40:26.280935  102947 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:40:26.280943  102947 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:40:26.280949  102947 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:40:26.280960  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:40:26.281017  102947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:40:26.281085  102947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:40:26.281094  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:40:26.281219  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:40:26.291493  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:26.319669  102947 start.go:296] duration metric: took 160.947592ms for postStartSetup
	I0919 22:40:26.319764  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:40:26.319819  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.340008  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.438911  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:40:26.444573  102947 fix.go:56] duration metric: took 4.568092826s for fixHost
	I0919 22:40:26.444606  102947 start.go:83] releasing machines lock for "ha-326307-m03", held for 4.568161658s
	I0919 22:40:26.444685  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m03
	I0919 22:40:26.470387  102947 out.go:179] * Found network options:
	I0919 22:40:26.472070  102947 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:40:26.473856  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:26.473888  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:26.473917  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:40:26.473931  102947 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:40:26.474012  102947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:40:26.474058  102947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:40:26.474062  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.474114  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m03
	I0919 22:40:26.500808  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.503237  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m03/id_rsa Username:docker}
	I0919 22:40:26.708883  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:40:26.738637  102947 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:40:26.738718  102947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:40:26.752845  102947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:40:26.752872  102947 start.go:495] detecting cgroup driver to use...
	I0919 22:40:26.752907  102947 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:40:26.752955  102947 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:40:26.771737  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:40:26.788372  102947 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:40:26.788434  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:40:26.810086  102947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:40:26.828338  102947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:40:26.983767  102947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:40:27.150072  102947 docker.go:234] disabling docker service ...
	I0919 22:40:27.150147  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:40:27.173008  102947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:40:27.193344  102947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:40:27.317738  102947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:40:27.460983  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:40:27.485592  102947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:40:27.507890  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:40:27.520044  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:40:27.534512  102947 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:40:27.534574  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:40:27.548984  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:27.562483  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:40:27.577519  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:40:27.592117  102947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:40:27.604075  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:40:27.616958  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:40:27.631964  102947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:40:27.646292  102947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:40:27.658210  102947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:40:27.672336  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:27.803893  102947 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:40:28.062245  102947 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:40:28.062313  102947 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:40:28.066699  102947 start.go:563] Will wait 60s for crictl version
	I0919 22:40:28.066771  102947 ssh_runner.go:195] Run: which crictl
	I0919 22:40:28.071489  102947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:40:28.109371  102947 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:40:28.109444  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:28.135369  102947 ssh_runner.go:195] Run: containerd --version
	I0919 22:40:28.166192  102947 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:40:28.167830  102947 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:40:28.169229  102947 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:40:28.170416  102947 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:40:28.189509  102947 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:40:28.193804  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:28.206515  102947 mustload.go:65] Loading cluster: ha-326307
	I0919 22:40:28.206800  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:28.207069  102947 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:40:28.226787  102947 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:40:28.227094  102947 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.4
	I0919 22:40:28.227201  102947 certs.go:194] generating shared ca certs ...
	I0919 22:40:28.227247  102947 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:40:28.227424  102947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:40:28.227487  102947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:40:28.227504  102947 certs.go:256] generating profile certs ...
	I0919 22:40:28.227586  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:40:28.227634  102947 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.95aca604
	I0919 22:40:28.227713  102947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:40:28.227730  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:40:28.227749  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:40:28.227764  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:40:28.227783  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:40:28.227800  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:40:28.227819  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:40:28.227839  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:40:28.227862  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:40:28.227929  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:40:28.227971  102947 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:40:28.227984  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:40:28.228019  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:40:28.228051  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:40:28.228082  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:40:28.228166  102947 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:40:28.228213  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:40:28.228239  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:40:28.228259  102947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.228383  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:40:28.247785  102947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32819 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:40:28.336571  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:40:28.341071  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:40:28.354226  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:40:28.358563  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:40:28.373723  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:40:28.378406  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:40:28.394406  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:40:28.399415  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:40:28.416091  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:40:28.420161  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:40:28.435710  102947 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:40:28.439831  102947 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:40:28.454973  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:40:28.488291  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:40:28.520386  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:40:28.548878  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:40:28.577674  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:40:28.606894  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:40:28.635467  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:40:28.664035  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:40:28.692528  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:40:28.721969  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:40:28.750129  102947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:40:28.777226  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:40:28.798416  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:40:28.818429  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:40:28.844040  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:40:28.875418  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:40:28.898298  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:40:28.918961  102947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:40:28.940259  102947 ssh_runner.go:195] Run: openssl version
	I0919 22:40:28.946752  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:40:28.959425  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.964456  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.964528  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:40:28.973714  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:40:28.984876  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:40:28.996258  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:40:29.000541  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:40:29.000605  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:40:29.008599  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:40:29.018788  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:40:29.030314  102947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:40:29.034634  102947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:40:29.034700  102947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:40:29.042685  102947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:40:29.052467  102947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:40:29.056255  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:40:29.063105  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:40:29.071819  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:40:29.079410  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:40:29.086705  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:40:29.094001  102947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:40:29.101257  102947 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 containerd true true} ...
	I0919 22:40:29.101378  102947 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:40:29.101410  102947 kube-vip.go:115] generating kube-vip config ...
	I0919 22:40:29.101456  102947 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:40:29.115062  102947 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:40:29.115120  102947 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:40:29.115184  102947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:40:29.124866  102947 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:40:29.124920  102947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:40:29.135111  102947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:40:29.156313  102947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:40:29.177045  102947 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:40:29.198544  102947 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:40:29.203037  102947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:40:29.216695  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:29.333585  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:29.349312  102947 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:40:29.349626  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:29.352738  102947 out.go:179] * Verifying Kubernetes components...
	I0919 22:40:29.354445  102947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:40:29.474185  102947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:40:29.488500  102947 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:40:29.488573  102947 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:40:29.488783  102947 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m03" to be "Ready" ...
	I0919 22:40:29.492092  102947 node_ready.go:49] node "ha-326307-m03" is "Ready"
	I0919 22:40:29.492121  102947 node_ready.go:38] duration metric: took 3.321791ms for node "ha-326307-m03" to be "Ready" ...
	I0919 22:40:29.492134  102947 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:40:29.492205  102947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:40:29.506850  102947 api_server.go:72] duration metric: took 157.484065ms to wait for apiserver process to appear ...
	I0919 22:40:29.506886  102947 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:40:29.506910  102947 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:40:29.511130  102947 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:40:29.512015  102947 api_server.go:141] control plane version: v1.34.0
	I0919 22:40:29.512036  102947 api_server.go:131] duration metric: took 5.141712ms to wait for apiserver health ...
	I0919 22:40:29.512043  102947 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:40:29.518744  102947 system_pods.go:59] 24 kube-system pods found
	I0919 22:40:29.518774  102947 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.518782  102947 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.518787  102947 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:40:29.518791  102947 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:40:29.518796  102947 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:29.518800  102947 system_pods.go:61] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:29.518804  102947 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:29.518807  102947 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:29.518810  102947 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:40:29.518813  102947 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:40:29.518819  102947 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:29.518822  102947 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:40:29.518828  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.518858  102947 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.518862  102947 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:29.518868  102947 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:29.518873  102947 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:29.518879  102947 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:40:29.518884  102947 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.518888  102947 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.518894  102947 system_pods.go:61] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:29.518897  102947 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:29.518900  102947 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:29.518905  102947 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:29.518910  102947 system_pods.go:74] duration metric: took 6.861836ms to wait for pod list to return data ...
	I0919 22:40:29.518919  102947 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:40:29.521697  102947 default_sa.go:45] found service account: "default"
	I0919 22:40:29.521719  102947 default_sa.go:55] duration metric: took 2.795273ms for default service account to be created ...
	I0919 22:40:29.521728  102947 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:40:29.527102  102947 system_pods.go:86] 24 kube-system pods found
	I0919 22:40:29.527136  102947 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.527144  102947 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:40:29.527166  102947 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running
	I0919 22:40:29.527174  102947 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running
	I0919 22:40:29.527181  102947 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:40:29.527186  102947 system_pods.go:89] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:40:29.527195  102947 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:40:29.527200  102947 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:40:29.527209  102947 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running
	I0919 22:40:29.527214  102947 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running
	I0919 22:40:29.527224  102947 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:40:29.527233  102947 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running
	I0919 22:40:29.527244  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.527251  102947 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:40:29.527259  102947 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:40:29.527265  102947 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:40:29.527274  102947 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:40:29.527282  102947 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running
	I0919 22:40:29.527293  102947 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.527304  102947 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:40:29.527311  102947 system_pods.go:89] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:40:29.527318  102947 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:40:29.527326  102947 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:40:29.527331  102947 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:40:29.527342  102947 system_pods.go:126] duration metric: took 5.60777ms to wait for k8s-apps to be running ...
	I0919 22:40:29.527353  102947 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:40:29.527418  102947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:40:29.540084  102947 system_svc.go:56] duration metric: took 12.720236ms WaitForService to wait for kubelet
	I0919 22:40:29.540114  102947 kubeadm.go:578] duration metric: took 190.753677ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:40:29.540138  102947 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:40:29.543938  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:29.543961  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:29.543977  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:29.543981  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:29.543985  102947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:40:29.543988  102947 node_conditions.go:123] node cpu capacity is 8
	I0919 22:40:29.543992  102947 node_conditions.go:105] duration metric: took 3.848698ms to run NodePressure ...
	I0919 22:40:29.544002  102947 start.go:241] waiting for startup goroutines ...
	I0919 22:40:29.544021  102947 start.go:255] writing updated cluster config ...
	I0919 22:40:29.546124  102947 out.go:203] 
	I0919 22:40:29.547729  102947 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:40:29.547827  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:29.549464  102947 out.go:179] * Starting "ha-326307-m04" worker node in "ha-326307" cluster
	I0919 22:40:29.551423  102947 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:40:29.552959  102947 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:40:29.554347  102947 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:40:29.554374  102947 cache.go:58] Caching tarball of preloaded images
	I0919 22:40:29.554466  102947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:40:29.554528  102947 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:40:29.554544  102947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:40:29.554661  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:29.576604  102947 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:40:29.576623  102947 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:40:29.576636  102947 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:40:29.576658  102947 start.go:360] acquireMachinesLock for ha-326307-m04: {Name:mk65e4f546dae46b7c7a6cfe6f590e09a0a01676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:40:29.576722  102947 start.go:364] duration metric: took 36.867µs to acquireMachinesLock for "ha-326307-m04"
	I0919 22:40:29.576740  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:40:29.576747  102947 fix.go:54] fixHost starting: m04
	I0919 22:40:29.576991  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:40:29.599524  102947 fix.go:112] recreateIfNeeded on ha-326307-m04: state=Stopped err=<nil>
	W0919 22:40:29.599554  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:40:29.601341  102947 out.go:252] * Restarting existing docker container for "ha-326307-m04" ...
	I0919 22:40:29.601436  102947 cli_runner.go:164] Run: docker start ha-326307-m04
	I0919 22:40:29.856928  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:40:29.877141  102947 kic.go:430] container "ha-326307-m04" state is running.
	I0919 22:40:29.877564  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m04
	I0919 22:40:29.898099  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:40:29.898353  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:40:29.898408  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	I0919 22:40:29.919242  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:40:29.919493  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32834 <nil> <nil>}
	I0919 22:40:29.919509  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:40:29.920238  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53392->127.0.0.1:32834: read: connection reset by peer
	I0919 22:40:32.921592  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:35.923978  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:38.925460  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:41.925968  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:44.927435  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:47.928879  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:50.930439  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:53.931750  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:56.932223  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:40:59.933541  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:02.934449  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:05.936468  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:08.938720  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:11.939132  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:14.940311  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:17.941338  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:20.943720  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:23.944321  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:26.945127  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:29.946482  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:32.947311  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:35.949504  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:38.950829  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:41.951282  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:44.951718  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:47.952886  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:50.954501  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:53.955026  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:56.955566  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:41:59.956458  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:02.958263  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:05.960452  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:08.960827  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:11.961991  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:14.963364  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:17.964467  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:20.966794  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:23.967257  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:26.968419  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:29.969450  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:32.970449  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:35.972383  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:38.974402  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:41.974947  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:44.975961  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:47.977119  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:50.979045  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:53.979535  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:56.980106  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:42:59.981632  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:02.983145  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:05.985114  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:08.987742  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:11.988246  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:14.988636  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:17.990247  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:20.990690  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:23.991025  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:26.992363  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32834: connect: connection refused
	I0919 22:43:29.994267  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:43:29.994298  102947 ubuntu.go:182] provisioning hostname "ha-326307-m04"
	I0919 22:43:29.994384  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.014799  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.014894  102947 machine.go:96] duration metric: took 3m0.116525554s to provisionDockerMachine
	I0919 22:43:30.014980  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:43:30.015024  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.033859  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.033976  102947 retry.go:31] will retry after 180.600333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:30.215391  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.234687  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.234800  102947 retry.go:31] will retry after 396.872897ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:30.632462  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:30.651421  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:30.651553  102947 retry.go:31] will retry after 330.021621ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:30.982141  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:31.001874  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:31.001981  102947 retry.go:31] will retry after 902.78257ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:31.905550  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:31.924562  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:43:31.924688  102947 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:31.924702  102947 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:31.924747  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:43:31.924776  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:31.944532  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:31.944644  102947 retry.go:31] will retry after 370.439297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:32.316311  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:32.335705  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:32.335801  102947 retry.go:31] will retry after 471.735503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:32.808402  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:32.828725  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:43:32.828845  102947 retry.go:31] will retry after 653.918581ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:33.483771  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:43:33.505126  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:43:33.505274  102947 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:33.505310  102947 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:33.505321  102947 fix.go:56] duration metric: took 3m3.928573811s for fixHost
	I0919 22:43:33.505333  102947 start.go:83] releasing machines lock for "ha-326307-m04", held for 3m3.928601896s
	W0919 22:43:33.505353  102947 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:33.505432  102947 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:33.505457  102947 start.go:729] Will try again in 5 seconds ...
	I0919 22:43:38.507265  102947 start.go:360] acquireMachinesLock for ha-326307-m04: {Name:mk65e4f546dae46b7c7a6cfe6f590e09a0a01676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:43:38.507371  102947 start.go:364] duration metric: took 72.258µs to acquireMachinesLock for "ha-326307-m04"
	I0919 22:43:38.507394  102947 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:43:38.507402  102947 fix.go:54] fixHost starting: m04
	I0919 22:43:38.507660  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:43:38.526017  102947 fix.go:112] recreateIfNeeded on ha-326307-m04: state=Stopped err=<nil>
	W0919 22:43:38.526047  102947 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:43:38.528104  102947 out.go:252] * Restarting existing docker container for "ha-326307-m04" ...
	I0919 22:43:38.528195  102947 cli_runner.go:164] Run: docker start ha-326307-m04
	I0919 22:43:38.792918  102947 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:43:38.812750  102947 kic.go:430] container "ha-326307-m04" state is running.
	I0919 22:43:38.813122  102947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m04
	I0919 22:43:38.835015  102947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:43:38.835331  102947 machine.go:93] provisionDockerMachine start ...
	I0919 22:43:38.835404  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	I0919 22:43:38.855863  102947 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:38.856092  102947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32839 <nil> <nil>}
	I0919 22:43:38.856104  102947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:43:38.856765  102947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33486->127.0.0.1:32839: read: connection reset by peer
	I0919 22:43:41.857087  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:44.857460  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:47.858230  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:50.860407  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:53.860840  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:56.862141  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:43:59.863585  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:02.864745  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:05.867376  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:08.869862  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:11.870894  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:14.871487  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:17.872736  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:20.874506  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:23.875596  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:26.875979  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:29.877435  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:32.878977  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:35.881595  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:38.883657  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:41.884099  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:44.885281  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:47.887113  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:50.889449  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:53.889898  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:56.891131  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:44:59.893426  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:02.895108  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:05.896902  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:08.899087  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:11.900184  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:14.901096  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:17.902201  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:20.904503  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:23.904962  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:26.906198  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:29.908575  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:32.910119  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:35.912526  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:38.914521  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:41.915090  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:44.916505  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:47.917924  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:50.919469  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:53.919814  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:56.920315  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:45:59.922657  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:02.924190  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:05.926504  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:08.928432  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:11.929228  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:14.930499  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:17.931536  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:20.934030  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:23.934965  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:26.936258  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:29.938459  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:32.939438  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:35.941457  102947 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32839: connect: connection refused
	I0919 22:46:38.943814  102947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:46:38.943857  102947 ubuntu.go:182] provisioning hostname "ha-326307-m04"
	I0919 22:46:38.943941  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:38.964275  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:38.964337  102947 machine.go:96] duration metric: took 3m0.128991371s to provisionDockerMachine
	I0919 22:46:38.964416  102947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:46:38.964451  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:38.983816  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:38.983960  102947 retry.go:31] will retry after 364.420464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:39.349386  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:39.369081  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:39.369225  102947 retry.go:31] will retry after 206.788026ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:39.576720  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:39.596502  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:39.596609  102947 retry.go:31] will retry after 511.892744ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:40.109367  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:40.129534  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:40.129648  102947 retry.go:31] will retry after 811.778179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:40.941718  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:40.962501  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:46:40.962610  102947 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:46:40.962628  102947 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:40.962672  102947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:46:40.962701  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:40.983319  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:40.983479  102947 retry.go:31] will retry after 310.783714ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:41.295059  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:41.314519  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:41.314654  102947 retry.go:31] will retry after 532.410728ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:41.847306  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:41.866776  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	I0919 22:46:41.866902  102947 retry.go:31] will retry after 498.480272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:42.366422  102947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	W0919 22:46:42.388450  102947 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04 returned with exit code 1
	W0919 22:46:42.388595  102947 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:46:42.388613  102947 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:42.388623  102947 fix.go:56] duration metric: took 3m3.881222347s for fixHost
	I0919 22:46:42.388631  102947 start.go:83] releasing machines lock for "ha-326307-m04", held for 3m3.881250201s
	W0919 22:46:42.388708  102947 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-326307" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:46:42.391386  102947 out.go:203] 
	W0919 22:46:42.393146  102947 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:46:42.393190  102947 out.go:285] * 
	W0919 22:46:42.395039  102947 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 22:46:42.396646  102947 out.go:203] 
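
For reference, the host-port lookup that the provisioning loop above keeps retrying can be reproduced by hand. The sketch below is not minikube's code; it simply shells out to the same `docker container inspect` Go template that the cli_runner.go:164 lines record, using the container name ha-326307-m04 taken from the log, and assumes a local Docker daemon is available.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same Go template that appears in the cli_runner.go:164 lines above: it reads
	// the host port Docker published for the container's 22/tcp endpoint.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-326307-m04").CombinedOutput()
	if err != nil {
		// When the container is not running, .NetworkSettings.Ports has no "22/tcp"
		// entry, the template indexing fails, and inspect exits with code 1 -- the
		// same "returned with exit code 1" condition the log records above.
		fmt.Printf("inspect failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("SSH host port: %s", out)
}

A non-zero exit here (rather than a port number such as 32839) is consistent with the "unable to inspect a not running container to get SSH port" errors above: the ha-326307-m04 container never came back up, so no SSH port was published.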
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb7d0d80b9c23       6e38f40d628db       5 minutes ago       Running             storage-provisioner       2                   a66e01a465731       storage-provisioner
	fea1c0534d95d       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   c6c63e662186b       kindnet-gxnzs
	fff949799c16f       52546a367cc9e       6 minutes ago       Running             coredns                   1                   d66fcc49f8eef       coredns-66bc5c9577-wqvzd
	9b01ee2966e08       52546a367cc9e       6 minutes ago       Running             coredns                   1                   8915a954c3a5e       coredns-66bc5c9577-9j5pw
	471e8ec48d678       8c811b4aec35f       6 minutes ago       Running             busybox                   1                   4242a65c0c92e       busybox-7b57f96db7-m8swj
	a7d6081c4523a       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       1                   a66e01a465731       storage-provisioner
	c1e4cc3b9a7f1       df0860106674d       6 minutes ago       Running             kube-proxy                1                   bb87d6f8210e1       kube-proxy-8kxtv
	83bc1a5b44143       765655ea60781       6 minutes ago       Running             kube-vip                  0                   8124d18d08f1c       kube-vip-ha-326307
	63dc43f0224fa       46169d968e920       6 minutes ago       Running             kube-scheduler            1                   b84e223a297e4       kube-scheduler-ha-326307
	7a855457ed99a       a0af72f2ec6d6       6 minutes ago       Running             kube-controller-manager   1                   35b9028490f76       kube-controller-manager-ha-326307
	c543ffd76b85c       5f1f5298c888d       6 minutes ago       Running             etcd                      1                   a85600718119d       etcd-ha-326307
	e1a181d28b52f       90550c43ad2bc       6 minutes ago       Running             kube-apiserver            1                   4ff7be1cea576       kube-apiserver-ha-326307
	7791f71e5d5a5       8c811b4aec35f       21 minutes ago      Exited              busybox                   0                   b5e0c0fffea25       busybox-7b57f96db7-m8swj
	ca68bbc020e20       52546a367cc9e       23 minutes ago      Exited              coredns                   0                   132023f334782       coredns-66bc5c9577-9j5pw
	1f618dc8f0392       52546a367cc9e       23 minutes ago      Exited              coredns                   0                   a5ac32b4949ab       coredns-66bc5c9577-wqvzd
	365cc00c2e009       409467f978b4a       23 minutes ago      Exited              kindnet-cni               0                   96e027ec2b5fb       kindnet-gxnzs
	bd9e41958ffbb       df0860106674d       23 minutes ago      Exited              kube-proxy                0                   06da62af16945       kube-proxy-8kxtv
	456a0c3cbf5ce       46169d968e920       23 minutes ago      Exited              kube-scheduler            0                   f02b9e82ff9b1       kube-scheduler-ha-326307
	05ab0247624a7       a0af72f2ec6d6       23 minutes ago      Exited              kube-controller-manager   0                   6026f58e8c23a       kube-controller-manager-ha-326307
	e5c59a6abe977       5f1f5298c888d       23 minutes ago      Exited              etcd                      0                   5f89382a468ad       etcd-ha-326307
	e80d65e3c7c18       90550c43ad2bc       23 minutes ago      Exited              kube-apiserver            0                   3813626701bd1       kube-apiserver-ha-326307
	
	
	==> containerd <==
	Sep 19 22:40:50 ha-326307 containerd[478]: time="2025-09-19T22:40:50.496292846Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 19 22:40:50 ha-326307 containerd[478]: time="2025-09-19T22:40:50.941042111Z" level=info msg="RemoveContainer for \"f52d2d9f5881b5d50f95a3aeef2c876d51dd6b2be6c464a7464b26e8175b0fc6\""
	Sep 19 22:40:50 ha-326307 containerd[478]: time="2025-09-19T22:40:50.945894995Z" level=info msg="RemoveContainer for \"f52d2d9f5881b5d50f95a3aeef2c876d51dd6b2be6c464a7464b26e8175b0fc6\" returns successfully"
	Sep 19 22:41:02 ha-326307 containerd[478]: time="2025-09-19T22:41:02.735151860Z" level=info msg="CreateContainer within sandbox \"a66e01a46573183df8e2c6c041cfc03b30ce85291f726f529b293f52bca48ac9\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
	Sep 19 22:41:02 ha-326307 containerd[478]: time="2025-09-19T22:41:02.750197533Z" level=info msg="CreateContainer within sandbox \"a66e01a46573183df8e2c6c041cfc03b30ce85291f726f529b293f52bca48ac9\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"bb7d0d80b9c2303a244cfffc060085bd15528b57546277cdaaaf2dd707b68f1f\""
	Sep 19 22:41:02 ha-326307 containerd[478]: time="2025-09-19T22:41:02.750866519Z" level=info msg="StartContainer for \"bb7d0d80b9c2303a244cfffc060085bd15528b57546277cdaaaf2dd707b68f1f\""
	Sep 19 22:41:02 ha-326307 containerd[478]: time="2025-09-19T22:41:02.809028664Z" level=info msg="StartContainer for \"bb7d0d80b9c2303a244cfffc060085bd15528b57546277cdaaaf2dd707b68f1f\" returns successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.721548399Z" level=info msg="RemoveContainer for \"d1c181558b58c3ebc7dae5df97b80119a8c6d18dc19b0532b61999c4db0c6668\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.726063631Z" level=info msg="RemoveContainer for \"d1c181558b58c3ebc7dae5df97b80119a8c6d18dc19b0532b61999c4db0c6668\" returns successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.728293194Z" level=info msg="StopPodSandbox for \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.728427999Z" level=info msg="TearDown network for sandbox \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\" successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.728450762Z" level=info msg="StopPodSandbox for \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\" returns successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.728930508Z" level=info msg="RemovePodSandbox for \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.728969583Z" level=info msg="Forcibly stopping sandbox \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.729045579Z" level=info msg="TearDown network for sandbox \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\" successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.733274152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.733381747Z" level=info msg="RemovePodSandbox \"7b77cca917bf43aec641dd376698b1ab498cba7fff13a11d50260c5ec9578bca\" returns successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734017576Z" level=info msg="StopPodSandbox for \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734138515Z" level=info msg="TearDown network for sandbox \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\" successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734174247Z" level=info msg="StopPodSandbox for \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\" returns successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734599814Z" level=info msg="RemovePodSandbox for \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734628547Z" level=info msg="Forcibly stopping sandbox \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\""
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.734699211Z" level=info msg="TearDown network for sandbox \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\" successfully"
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.738452443Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 19 22:41:12 ha-326307 containerd[478]: time="2025-09-19T22:41:12.738554754Z" level=info msg="RemovePodSandbox \"5717652da0ef4a695e109e98c5ea40ceebc13c17b1fbf8314725c9f0fc38f80b\" returns successfully"
	
	
	==> coredns [1f618dc8f039242512f0147a2a38ee8cc0d3d5730c44724d6fc5d4c498121cd6] <==
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54337 - 24572 "HINFO IN 5143313645322175939.5313042790825403134. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069732464s
	[INFO] 10.244.0.4:35490 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000326279s
	[INFO] 10.244.0.4:55330 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.014239882s
	[INFO] 10.244.1.2:39628 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210602s
	[INFO] 10.244.1.2:46891 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.001261026s
	[INFO] 10.244.1.2:43124 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.00098216s
	[INFO] 10.244.1.2:49555 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.00013424s
	[INFO] 10.244.0.4:40362 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00024135s
	[INFO] 10.244.0.4:45629 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168694s
	[INFO] 10.244.1.2:52354 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189457s
	[INFO] 10.244.1.2:43857 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161715s
	[INFO] 10.244.1.2:51922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145764s
	[INFO] 10.244.1.2:57320 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009888497s
	[INFO] 10.244.1.2:49841 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169285s
	[INFO] 10.244.0.4:51548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159656s
	[INFO] 10.244.0.4:48681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110507s
	[INFO] 10.244.1.2:52993 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137337s
	[INFO] 10.244.0.4:59855 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113524s
	[INFO] 10.244.1.2:56284 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188608s
	[INFO] 10.244.1.2:58675 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149085s
	[INFO] 10.244.1.2:38911 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131505s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9b01ee2966e081085b732d62e68985fd9249574188499e7e99fa53ff3e585c2d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35530 - 6163 "HINFO IN 6373030861249236477.4474115650148028833. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02205233s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [ca68bbc020e2091cdd81beb73e5c446a19425f555a16039acec158683b396c93] <==
	[INFO] 127.0.0.1:49588 - 50300 "HINFO IN 9047056621409016881.2982736294753326061. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063128768s
	[INFO] 10.244.0.4:39328 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.014249598s
	[INFO] 10.244.0.4:59759 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.013332957s
	[INFO] 10.244.0.4:50336 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.008865788s
	[INFO] 10.244.1.2:42753 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000158745s
	[INFO] 10.244.0.4:52334 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261159s
	[INFO] 10.244.0.4:43558 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010645205s
	[INFO] 10.244.0.4:51059 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154122s
	[INFO] 10.244.0.4:46147 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012594143s
	[INFO] 10.244.0.4:39163 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143755s
	[INFO] 10.244.0.4:57061 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014731s
	[INFO] 10.244.1.2:59502 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000129746s
	[INFO] 10.244.1.2:49570 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172915s
	[INFO] 10.244.1.2:48519 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000175653s
	[INFO] 10.244.0.4:50569 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000326714s
	[INFO] 10.244.0.4:45465 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000234038s
	[INFO] 10.244.1.2:52569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176154s
	[INFO] 10.244.1.2:36719 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000205481s
	[INFO] 10.244.1.2:58705 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195468s
	[INFO] 10.244.0.4:38035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216169s
	[INFO] 10.244.0.4:52287 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129599s
	[INFO] 10.244.0.4:37285 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186803s
	[INFO] 10.244.1.2:39163 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165716s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fff949799c16ffb392a665b0e5af2f326948a468e2495b8ea2fa176e06b5cfbf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60701 - 36326 "HINFO IN 1706815658337671432.2830354807318160675. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06080012s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-326307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:23:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:46:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:45:24 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:45:24 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:45:24 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:45:24 +0000   Fri, 19 Sep 2025 22:23:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-326307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6ba0924deaa4643b45558c406a92530
	  System UUID:                9c3f30ed-68b2-4a1c-af95-9031ae210a78
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-m8swj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-66bc5c9577-9j5pw             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23m
	  kube-system                 coredns-66bc5c9577-wqvzd             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23m
	  kube-system                 etcd-ha-326307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         23m
	  kube-system                 kindnet-gxnzs                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23m
	  kube-system                 kube-apiserver-ha-326307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-326307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-8kxtv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-326307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-326307                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m35s                  kube-proxy       
	  Normal  Starting                 23m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)      kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)      kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)      kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                    kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  Starting                 23m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    23m                    kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  23m                    kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           8m16s                  node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  Starting                 6m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m44s (x8 over 6m44s)  kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m44s (x8 over 6m44s)  kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m44s (x7 over 6m44s)  kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m34s                  node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           6m34s                  node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           6m28s                  node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	
	
	Name:               ha-326307-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:46:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:43:43 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:43:43 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:43:43 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:43:43 +0000   Fri, 19 Sep 2025 22:24:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-326307-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 9fd69bf7d4de4d0cb4316de818a4daa2
	  System UUID:                8095cd89-f43b-4d8a-adef-b40d6aaa7ad2
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tfpvf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 etcd-ha-326307-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         22m
	  kube-system                 kindnet-mk6pv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-ha-326307-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-326307-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-q8mtj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-326307-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-326307-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 22m                    kube-proxy       
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           22m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  NodeAllocatableEnforced  8m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m22s (x7 over 8m22s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m22s (x8 over 8m22s)  kubelet          Node ha-326307-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m22s (x8 over 8m22s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m22s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m16s                  node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  Starting                 6m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m41s (x8 over 6m41s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s (x8 over 6m41s)  kubelet          Node ha-326307-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m41s (x7 over 6m41s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m34s                  node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           6m34s                  node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           6m28s                  node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	
	
	==> dmesg <==
	[Sep19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001853] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001006] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.090013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.467231] i8042: Warning: Keylock active
	[  +0.009747] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004210] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001075] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000901] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000908] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001042] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001465] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.534799] block sda: the capability attribute has been deprecated.
	[  +0.099617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027269] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.089616] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6] <==
	{"level":"info","ts":"2025-09-19T22:40:24.177644Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:40:24.185512Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:40:24.185980Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.175107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:51430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:47.201772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:51452","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:46:47.211965Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 16449250771884659557)"}
	{"level":"info","ts":"2025-09-19T22:46:47.213841Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"5512420eb470d1ce","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-19T22:46:47.213908Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.213977Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:46:47.214000Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.214039Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.213998Z","caller":"etcdserver/server.go:718","msg":"rejected Raft message from removed member","local-member-id":"aec36adc501070cc","removed-member-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.214075Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2025-09-19T22:46:47.214126Z","caller":"etcdserver/server.go:718","msg":"rejected Raft message from removed member","local-member-id":"aec36adc501070cc","removed-member-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.214134Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2025-09-19T22:46:47.214052Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:46:47.214191Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.214316Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","error":"context canceled"}
	{"level":"warn","ts":"2025-09-19T22:46:47.214372Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"5512420eb470d1ce","error":"failed to read 5512420eb470d1ce on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-09-19T22:46:47.214404Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.214547Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce","error":"context canceled"}
	{"level":"info","ts":"2025-09-19T22:46:47.214582Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:46:47.214605Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"5512420eb470d1ce"}
	{"level":"info","ts":"2025-09-19T22:46:47.214619Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"5512420eb470d1ce"}
	{"level":"warn","ts":"2025-09-19T22:46:47.224066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:46740","server-name":"","error":"EOF"}
	
	
	==> etcd [e5c59a6abe97751de42afd27010936d1c3a401fad6cd730e75a1692a895b4fbc] <==
	{"level":"info","ts":"2025-09-19T22:39:52.140938Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-09-19T22:39:52.162339Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082596421131,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-19T22:39:52.340049Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"1.996479221s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-09-19T22:39:52.340124Z","caller":"traceutil/trace.go:172","msg":"trace[586308872] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"1.996568167s","start":"2025-09-19T22:39:50.343542Z","end":"2025-09-19T22:39:52.340111Z","steps":["trace[586308872] 'agreement among raft nodes before linearized reading'  (duration: 1.996477658s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:39:52.340628Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:39:50.343527Z","time spent":"1.997078725s","remote":"127.0.0.1:36004","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2025/09/19 22:39:52 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-09-19T22:39:52.496622Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:39:45.496513Z","time spent":"7.000101766s","remote":"127.0.0.1:36464","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	{"level":"warn","ts":"2025-09-19T22:39:52.664567Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082596421131,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-19T22:39:53.164691Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082596421131,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-19T22:39:53.664930Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082596421131,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-09-19T22:39:53.841224Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-19T22:39:53.841312Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-19T22:39:53.841335Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4070] sent MsgPreVote request to 5512420eb470d1ce at term 2"}
	{"level":"info","ts":"2025-09-19T22:39:53.841349Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 2, index: 4070] sent MsgPreVote request to e4477a6cd7815365 at term 2"}
	{"level":"info","ts":"2025-09-19T22:39:53.841387Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-09-19T22:39:53.841403Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-09-19T22:39:53.856629Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"10.006331529s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-09-19T22:39:53.856703Z","caller":"traceutil/trace.go:172","msg":"trace[357958415] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"10.006425985s","start":"2025-09-19T22:39:43.850264Z","end":"2025-09-19T22:39:53.856690Z","steps":["trace[357958415] 'agreement among raft nodes before linearized reading'  (duration: 10.006330214s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:39:53.856753Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:39:43.850240Z","time spent":"10.006497987s","remote":"127.0.0.1:36302","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	2025/09/19 22:39:53 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-09-19T22:39:54.165033Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082596421131,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-19T22:39:54.350624Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"1.999804258s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context deadline exceeded"}
	{"level":"info","ts":"2025-09-19T22:39:54.350972Z","caller":"traceutil/trace.go:172","msg":"trace[1511115829] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"2.00016656s","start":"2025-09-19T22:39:52.350791Z","end":"2025-09-19T22:39:54.350957Z","steps":["trace[1511115829] 'agreement among raft nodes before linearized reading'  (duration: 1.999802512s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:39:54.351034Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:39:52.350777Z","time spent":"2.000237823s","remote":"127.0.0.1:35978","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2025/09/19 22:39:54 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> kernel <==
	 22:46:56 up  1:29,  0 users,  load average: 2.36, 1.44, 1.10
	Linux ha-326307 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [365cc00c2e009eeed7e71d1202a4d406c12f0d9faee38762ba691eb2d7c71f89] <==
	I0919 22:39:10.992568       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:20.990595       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:20.990634       1 main.go:301] handling current node
	I0919 22:39:20.990655       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:20.990663       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:39:20.990874       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:20.990888       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:30.995276       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:30.995312       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:30.995572       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:30.995598       1 main.go:301] handling current node
	I0919 22:39:30.995611       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:30.995615       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:39:40.996306       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:40.996354       1 main.go:301] handling current node
	I0919 22:39:40.996386       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:40.996395       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:39:40.996628       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:40.996654       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:50.991728       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:39:50.991865       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:39:50.992227       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:39:50.992324       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:39:50.992803       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:39:50.992828       1 main.go:301] handling current node
	
	
	==> kindnet [fea1c0534d95d8681a40f476ef920c8ced5eb8897a63d871e66830a2e35509fc] <==
	I0919 22:46:11.327662       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:46:11.327920       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:46:11.327938       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:21.328030       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:46:21.328073       1 main.go:301] handling current node
	I0919 22:46:21.328087       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:21.328093       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:46:21.328336       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:46:21.328349       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:31.327485       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:31.327520       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:46:31.327776       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:46:31.327794       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:31.327897       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:46:31.327908       1 main.go:301] handling current node
	I0919 22:46:41.328117       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:46:41.328176       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:41.328398       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:46:41.328415       1 main.go:301] handling current node
	I0919 22:46:41.328447       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:41.328457       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:46:51.327464       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:46:51.327528       1 main.go:301] handling current node
	I0919 22:46:51.327543       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:51.327548       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5] <==
	I0919 22:40:19.279381       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	W0919 22:40:19.281370       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3 192.168.49.4]
	I0919 22:40:19.295421       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0919 22:40:19.295734       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:40:19.295813       1 policy_source.go:240] refreshing policies
	I0919 22:40:19.318977       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 22:40:19.385137       1 controller.go:667] quota admission added evaluator for: endpoints
	E0919 22:40:19.394148       1 controller.go:97] Error removing old endpoints from kubernetes service: Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:40:19.817136       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0919 22:40:20.175946       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 22:40:21.106965       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I0919 22:40:21.115392       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 22:40:22.902022       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:40:23.000359       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 22:40:23.094961       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0919 22:41:31.899871       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:41:34.521052       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:42:39.388525       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:42:45.838122       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:43:41.302570       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:44:00.530191       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:44:44.037874       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:45:10.813928       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:46:01.956836       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:46:26.916270       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-apiserver [e80d65e3c7c18da87f7fa003e39382f5a4285ba4782fc295197421c6b882a161] <==
	E0919 22:39:54.523383       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.523431       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.526237       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.526320       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.522979       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527081       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527220       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527341       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527429       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527492       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527556       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.527638       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528262       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528338       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528394       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528418       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528451       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528480       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.528501       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533700       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533915       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533941       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533972       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533985       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:39:54.533997       1 watcher.go:335] watch chan error: etcdserver: no leader
	
	
	==> kube-controller-manager [05ab0247624a7f8ffa6bc948e3abc3adc49911c297291eb0a6dd42e3df39f4cd] <==
	I0919 22:23:38.744532       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:23:38.744726       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0919 22:23:38.744739       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0919 22:23:38.744729       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:23:38.744737       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:23:38.744759       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:23:38.745195       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 22:23:38.745255       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:23:38.746448       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:23:38.748706       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 22:23:38.750017       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.750086       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:23:38.751270       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 22:23:38.760899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 22:23:38.760926       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 22:23:38.760971       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 22:23:38.765332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:38.771790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:08.307746       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m02\" does not exist"
	I0919 22:24:08.319829       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:24:08.699971       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m02"
	E0919 22:24:31.036531       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8ztpb failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8ztpb\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:24:31.706808       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326307-m03\" does not exist"
	I0919 22:24:31.736561       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326307-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:24:33.715916       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m03"
	
	
	==> kube-controller-manager [7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c] <==
	I0919 22:40:22.614855       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 22:40:22.616016       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0919 22:40:22.622579       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0919 22:40:22.624722       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 22:40:22.626205       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:40:22.627256       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:40:22.631207       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:40:22.638798       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:40:22.639864       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0919 22:40:22.639886       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:40:22.639904       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0919 22:40:22.640312       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 22:40:22.640328       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:40:22.640420       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 22:40:22.640606       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307"
	I0919 22:40:22.640638       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m03"
	I0919 22:40:22.640606       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m02"
	I0919 22:40:22.640694       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 22:40:22.946089       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-49s8d\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:40:22.946224       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e75e0609-6186-44fd-8674-15383f700490", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-49s8d": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:40:56.500901       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-49s8d\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:40:56.501810       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e75e0609-6186-44fd-8674-15383f700490", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-49s8d": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:40:57.687491       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-49s8d\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:40:57.688223       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e75e0609-6186-44fd-8674-15383f700490", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-49s8d": the object has been modified; please apply your changes to the latest version and try again
	E0919 22:46:46.068479       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-proxy [bd9e41958ffbbb27dab3d180a56fb27df36a1a1896db3a6f322a8aabcda57677] <==
	I0919 22:23:40.183862       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:23:40.251957       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:23:40.353105       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:23:40.353291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:23:40.353503       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:23:40.383440       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:23:40.383522       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:23:40.391534       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:23:40.391999       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:23:40.392045       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:23:40.394189       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:23:40.394304       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:23:40.394470       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:23:40.394480       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:23:40.394266       1 config.go:200] "Starting service config controller"
	I0919 22:23:40.394506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:23:40.394279       1 config.go:309] "Starting node config controller"
	I0919 22:23:40.394533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:23:40.394540       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:23:40.494617       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:23:40.494643       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:23:40.494649       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c1e4cc3b9a7f1259a1339b951fd30079b99dc7acedc895c7ae90814405daad16] <==
	I0919 22:40:20.575328       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:40:20.672061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:40:20.772951       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:40:20.773530       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:40:20.774779       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:40:20.837591       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:40:20.837664       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:40:20.853483       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:40:20.853910       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:40:20.853934       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:40:20.859319       1 config.go:309] "Starting node config controller"
	I0919 22:40:20.859436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:40:20.859447       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:40:20.859941       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:40:20.859974       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:40:20.860439       1 config.go:200] "Starting service config controller"
	I0919 22:40:20.860604       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:40:20.861833       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:40:20.862286       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:40:20.960109       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:40:20.960793       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:40:20.962617       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [456a0c3cbf5ce028b9cbac658728c1fee13ad8e2659bfa0c625cd685d711c708] <==
	E0919 22:23:33.115040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:23:33.129278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:23:33.194774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:23:33.337699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0919 22:23:35.055089       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:24:08.346116       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:08.346301       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mk6pv\": pod kindnet-mk6pv is already assigned to node \"ha-326307-m02\"" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:08.365410       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-78xs2" node="ha-326307-m02"
	E0919 22:24:08.365600       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-78xs2\": pod kindnet-78xs2 is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="kube-system/kindnet-78xs2"
	E0919 22:24:08.379248       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"kindnet-78xs2\" not found" pod="kube-system/kindnet-78xs2"
	E0919 22:24:10.002296       1 schedule_one.go:975] "Scheduler cache AssumePod failed" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" pod="kube-system/kindnet-mk6pv"
	E0919 22:24:10.002334       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="pod 71a20992-8279-4040-9edc-bedef6e7b570(kube-system/kindnet-mk6pv) is in the cache, so can't be assumed" logger="UnhandledError" pod="kube-system/kindnet-mk6pv"
	I0919 22:24:10.002368       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mk6pv" node="ha-326307-m02"
	E0919 22:24:31.751287       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-pnj9r" node="ha-326307-m03"
	E0919 22:24:31.751375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pnj9r\": pod kindnet-pnj9r is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-pnj9r"
	E0919 22:24:31.887089       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:31.887576       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 173e48ec-ef56-4824-9f55-a04b199b7943(kube-system/kindnet-qxwpq) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	E0919 22:24:31.887605       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qxwpq\": pod kindnet-qxwpq is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-qxwpq"
	I0919 22:24:31.888969       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qxwpq" node="ha-326307-m03"
	E0919 22:24:35.828083       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-nzzlb" node="ha-326307-m03"
	E0919 22:24:35.828187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nzzlb\": pod kindnet-nzzlb is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-nzzlb"
	E0919 22:24:35.839864       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	E0919 22:24:35.839940       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ba4fd407-2e93-4324-ab2d-4f192d79fdf5(kube-system/kindnet-dmxl8) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	E0919 22:24:35.839964       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dmxl8\": pod kindnet-dmxl8 is already assigned to node \"ha-326307-m03\"" logger="UnhandledError" pod="kube-system/kindnet-dmxl8"
	I0919 22:24:35.841757       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dmxl8" node="ha-326307-m03"
	
	
	==> kube-scheduler [63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284] <==
	I0919 22:40:14.121705       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:40:19.175600       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 22:40:19.175869       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 22:40:19.175952       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:40:19.175968       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:40:19.217556       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:40:19.217674       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:40:19.220816       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:40:19.221038       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:40:19.226224       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:40:19.226332       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:40:19.321477       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.402545     619 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.403468     619 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:40:19 ha-326307 kubelet[619]: E0919 22:40:19.407687     619 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-326307\" already exists" pod="kube-system/kube-apiserver-ha-326307"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.701084     619 apiserver.go:52] "Watching apiserver"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.707631     619 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-326307" podUID="36baecf0-60bd-41c0-a3c8-45e4f6ebddad"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.728881     619 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-326307"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.728907     619 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-326307"
	Sep 19 22:40:19 ha-326307 kubelet[619]: E0919 22:40:19.731920     619 status_manager.go:1041] "Failed to update status for pod" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36baecf0-60bd-41c0-a3c8-45e4f6ebddad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-19T22:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-19T22:40:12Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-19T22:40:13Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-19T22:40:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-19T22:40:12Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\
\\"containerd://83bc1a5b44143d5315dbb67a4a3170035470350d5fd1fa6d599a962fd33614ad\\\",\\\"image\\\":\\\"ghcr.io/kube-vip/kube-vip:v1.0.0\\\",\\\"imageID\\\":\\\"ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-vip\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-19T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/admin.conf\\\",\\\"name\\\":\\\"kubeconfig\\\"}]}],\\\"startTime\\\":\\\"2025-09-19T22:40:12Z\\\"}}\" for pod \"kube-system\"/\"kube-vip-ha-326307\": pods \"kube-vip-ha-326307\" not found" pod="kube-system/kube-vip-ha-326307"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.801129     619 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813377     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cafe04c6-2dce-4b93-b6d1-205efc39b360-tmp\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813554     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-xtables-lock\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813666     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70be5fcc-7ab6-4eb1-870d-988fee1a01bb-xtables-lock\") pod \"kube-proxy-8kxtv\" (UID: \"70be5fcc-7ab6-4eb1-870d-988fee1a01bb\") " pod="kube-system/kube-proxy-8kxtv"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813815     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-lib-modules\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813849     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70be5fcc-7ab6-4eb1-870d-988fee1a01bb-lib-modules\") pod \"kube-proxy-8kxtv\" (UID: \"70be5fcc-7ab6-4eb1-870d-988fee1a01bb\") " pod="kube-system/kube-proxy-8kxtv"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.813876     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-cni-cfg\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:40:19 ha-326307 kubelet[619]: I0919 22:40:19.823375     619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-326307" podStartSLOduration=0.823354362 podStartE2EDuration="823.354362ms" podCreationTimestamp="2025-09-19 22:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:40:19.822728814 +0000 UTC m=+7.186818639" watchObservedRunningTime="2025-09-19 22:40:19.823354362 +0000 UTC m=+7.187444186"
	Sep 19 22:40:20 ha-326307 kubelet[619]: I0919 22:40:20.739430     619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fb2219973c6b37a95b47a05e51f4922" path="/var/lib/kubelet/pods/5fb2219973c6b37a95b47a05e51f4922/volumes"
	Sep 19 22:40:21 ha-326307 kubelet[619]: I0919 22:40:21.854071     619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 19 22:40:26 ha-326307 kubelet[619]: I0919 22:40:26.469144     619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 19 22:40:27 ha-326307 kubelet[619]: I0919 22:40:27.660037     619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 19 22:40:50 ha-326307 kubelet[619]: I0919 22:40:50.939471     619 scope.go:117] "RemoveContainer" containerID="f52d2d9f5881b5d50f95a3aeef2c876d51dd6b2be6c464a7464b26e8175b0fc6"
	Sep 19 22:40:50 ha-326307 kubelet[619]: I0919 22:40:50.939831     619 scope.go:117] "RemoveContainer" containerID="a7d6081c4523a1615c9325b1139e2303619e28b6fc78896684594ac51dc7c0d2"
	Sep 19 22:40:50 ha-326307 kubelet[619]: E0919 22:40:50.940028     619 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cafe04c6-2dce-4b93-b6d1-205efc39b360)\"" pod="kube-system/storage-provisioner" podUID="cafe04c6-2dce-4b93-b6d1-205efc39b360"
	Sep 19 22:41:02 ha-326307 kubelet[619]: I0919 22:41:02.729182     619 scope.go:117] "RemoveContainer" containerID="a7d6081c4523a1615c9325b1139e2303619e28b6fc78896684594ac51dc7c0d2"
	Sep 19 22:41:12 ha-326307 kubelet[619]: I0919 22:41:12.720023     619 scope.go:117] "RemoveContainer" containerID="d1c181558b58c3ebc7dae5df97b80119a8c6d18dc19b0532b61999c4db0c6668"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-326307 -n ha-326307
helpers_test.go:269: (dbg) Run:  kubectl --context ha-326307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-n7chr
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-326307 describe pod busybox-7b57f96db7-n7chr
helpers_test.go:290: (dbg) kubectl --context ha-326307 describe pod busybox-7b57f96db7-n7chr:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-n7chr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fzr8g (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fzr8g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  12s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  12s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  12s (x2 over 13s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.16s)
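The FailedScheduling events above give two distinct reasons the busybox replica stayed Pending: one node was marked unschedulable and the other two nodes were rejected by the deployment's pod anti-affinity. A minimal sketch of how either condition could be checked by hand against this profile (the context and deployment name are taken from the log above; the exact jsonpath query is illustrative, not part of the test):

	# list nodes and see which one is reported as unschedulable (cordoned)
	kubectl --context ha-326307 get nodes -o wide
	# dump the anti-affinity stanza the scheduler is matching against
	kubectl --context ha-326307 get deployment busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'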

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (353.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0919 22:47:25.078416   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:52:11.700792   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:52:25.078656   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: signal: killed (5m51.140052876s)

                                                
                                                
-- stdout --
	* [ha-326307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-326307" primary control-plane node in "ha-326307" cluster
	* Pulling base image v0.0.48 ...
	* Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	* Enabled addons: 
	
	* Starting "ha-326307-m02" control-plane node in "ha-326307" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-326307-m04" worker node in "ha-326307" cluster
	* Pulling base image v0.0.48 ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:47:22.393632  117334 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:47:22.393921  117334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:47:22.393933  117334 out.go:374] Setting ErrFile to fd 2...
	I0919 22:47:22.393938  117334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:47:22.394221  117334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:47:22.394719  117334 out.go:368] Setting JSON to false
	I0919 22:47:22.395662  117334 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5386,"bootTime":1758316656,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:47:22.395761  117334 start.go:140] virtualization: kvm guest
	I0919 22:47:22.398317  117334 out.go:179] * [ha-326307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:47:22.399944  117334 notify.go:220] Checking for updates...
	I0919 22:47:22.399959  117334 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:47:22.401898  117334 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:47:22.403996  117334 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:47:22.405830  117334 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:47:22.407112  117334 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:47:22.408951  117334 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:47:22.410843  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:22.411324  117334 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:47:22.437572  117334 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:47:22.437728  117334 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:47:22.496139  117334 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:47:22.484010804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:47:22.496324  117334 docker.go:318] overlay module found
	I0919 22:47:22.498896  117334 out.go:179] * Using the docker driver based on existing profile
	I0919 22:47:22.500625  117334 start.go:304] selected driver: docker
	I0919 22:47:22.500654  117334 start.go:918] validating driver "docker" against &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false
kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:47:22.500818  117334 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:47:22.500921  117334 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:47:22.561642  117334 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:47:22.55133994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:47:22.562343  117334 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:47:22.562375  117334 cni.go:84] Creating CNI manager for ""
	I0919 22:47:22.562446  117334 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 22:47:22.562497  117334 start.go:348] cluster config:
	{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I0919 22:47:22.564745  117334 out.go:179] * Starting "ha-326307" primary control-plane node in "ha-326307" cluster
	I0919 22:47:22.566242  117334 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:47:22.567765  117334 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:47:22.569680  117334 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:47:22.569714  117334 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:47:22.569742  117334 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 22:47:22.569758  117334 cache.go:58] Caching tarball of preloaded images
	I0919 22:47:22.569871  117334 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:47:22.569882  117334 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:47:22.570000  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:22.592300  117334 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:47:22.592321  117334 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:47:22.592343  117334 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:47:22.592366  117334 start.go:360] acquireMachinesLock for ha-326307: {Name:mk42b79b90944aab63c8b37c2f94e04ca1ebec1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:47:22.592454  117334 start.go:364] duration metric: took 68.019µs to acquireMachinesLock for "ha-326307"
	I0919 22:47:22.592478  117334 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:47:22.592483  117334 fix.go:54] fixHost starting: 
	I0919 22:47:22.592723  117334 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:47:22.611665  117334 fix.go:112] recreateIfNeeded on ha-326307: state=Stopped err=<nil>
	W0919 22:47:22.611692  117334 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:47:22.613846  117334 out.go:252] * Restarting existing docker container for "ha-326307" ...
	I0919 22:47:22.613926  117334 cli_runner.go:164] Run: docker start ha-326307
	I0919 22:47:22.875579  117334 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:47:22.895349  117334 kic.go:430] container "ha-326307" state is running.
	I0919 22:47:22.895752  117334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:47:22.915818  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:22.916071  117334 machine.go:93] provisionDockerMachine start ...
	I0919 22:47:22.916129  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:22.937975  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:22.938274  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32844 <nil> <nil>}
	I0919 22:47:22.938293  117334 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:47:22.938928  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36440->127.0.0.1:32844: read: connection reset by peer
	I0919 22:47:26.080350  117334 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:47:26.080445  117334 ubuntu.go:182] provisioning hostname "ha-326307"
	I0919 22:47:26.080532  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:26.100397  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:26.100707  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32844 <nil> <nil>}
	I0919 22:47:26.100724  117334 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307 && echo "ha-326307" | sudo tee /etc/hostname
	I0919 22:47:26.252874  117334 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:47:26.252970  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:26.272439  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:26.272712  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32844 <nil> <nil>}
	I0919 22:47:26.272732  117334 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:47:26.413385  117334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:47:26.413420  117334 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:47:26.413463  117334 ubuntu.go:190] setting up certificates
	I0919 22:47:26.413474  117334 provision.go:84] configureAuth start
	I0919 22:47:26.413529  117334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:47:26.434202  117334 provision.go:143] copyHostCerts
	I0919 22:47:26.434243  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:47:26.434284  117334 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:47:26.434299  117334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:47:26.434378  117334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:47:26.434493  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:47:26.434519  117334 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:47:26.434534  117334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:47:26.434569  117334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:47:26.434680  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:47:26.434709  117334 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:47:26.434716  117334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:47:26.434744  117334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:47:26.434809  117334 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307 san=[127.0.0.1 192.168.49.2 ha-326307 localhost minikube]
	I0919 22:47:26.537252  117334 provision.go:177] copyRemoteCerts
	I0919 22:47:26.537320  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:47:26.537353  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:26.556867  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32844 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:47:26.655613  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:47:26.655675  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:47:26.683834  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:47:26.683899  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:47:26.710194  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:47:26.710259  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:47:26.736950  117334 provision.go:87] duration metric: took 323.462377ms to configureAuth
	I0919 22:47:26.736983  117334 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:47:26.737245  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:26.737259  117334 machine.go:96] duration metric: took 3.821173876s to provisionDockerMachine
	I0919 22:47:26.737266  117334 start.go:293] postStartSetup for "ha-326307" (driver="docker")
	I0919 22:47:26.737277  117334 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:47:26.737316  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:47:26.737349  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:26.756463  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32844 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:47:26.856735  117334 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:47:26.861231  117334 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:47:26.861308  117334 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:47:26.861330  117334 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:47:26.861339  117334 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:47:26.861357  117334 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:47:26.861422  117334 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:47:26.861501  117334 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:47:26.861507  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:47:26.861601  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:47:26.872018  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:47:26.900025  117334 start.go:296] duration metric: took 162.740511ms for postStartSetup
	I0919 22:47:26.900125  117334 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:47:26.900202  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:26.919327  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32844 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:47:27.014623  117334 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:47:27.019402  117334 fix.go:56] duration metric: took 4.426890073s for fixHost
	I0919 22:47:27.019432  117334 start.go:83] releasing machines lock for "ha-326307", held for 4.42696261s
	I0919 22:47:27.019501  117334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:47:27.039853  117334 ssh_runner.go:195] Run: cat /version.json
	I0919 22:47:27.039910  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:27.039957  117334 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:47:27.040034  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:27.063354  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32844 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:47:27.063685  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32844 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:47:27.159565  117334 ssh_runner.go:195] Run: systemctl --version
	I0919 22:47:27.242250  117334 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:47:27.248258  117334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:47:27.271610  117334 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:47:27.271703  117334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:47:27.285297  117334 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
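	The two find/sed runs above patch the loopback CNI config (adding a "name" field and pinning cniVersion to 1.0.0) and rename any bridge/podman configs out of the way. A quick way to confirm the result on the node, as a sketch (file names under /etc/cni/net.d vary by base image):
	    # patched loopback config should now carry "name": "loopback" and "cniVersion": "1.0.0"
	    sudo cat /etc/cni/net.d/*loopback.conf*
	    # any disabled bridge/podman configs would show up with the .mk_disabled suffix
	    ls /etc/cni/net.d/*.mk_disabled 2>/dev/null || echo "none disabled"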
	I0919 22:47:27.285327  117334 start.go:495] detecting cgroup driver to use...
	I0919 22:47:27.285358  117334 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:47:27.285529  117334 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:47:27.302886  117334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:47:27.317247  117334 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:47:27.317299  117334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:47:27.332592  117334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:47:27.345247  117334 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:47:27.413652  117334 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:47:27.478751  117334 docker.go:234] disabling docker service ...
	I0919 22:47:27.478821  117334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:47:27.492359  117334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:47:27.505121  117334 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:47:27.571115  117334 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:47:27.637412  117334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:47:27.650858  117334 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:47:27.670199  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:47:27.681859  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:47:27.693661  117334 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:47:27.693746  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:47:27.705304  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:47:27.716745  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:47:27.728889  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:47:27.740462  117334 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:47:27.750962  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:47:27.762172  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:47:27.773910  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:47:27.785724  117334 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:47:27.795637  117334 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:47:27.805881  117334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:47:27.871569  117334 ssh_runner.go:195] Run: sudo systemctl restart containerd
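	The preceding sed edits switch containerd to the systemd cgroup driver, pin the sandbox image to registry.k8s.io/pause:3.10.1, and point the CNI conf_dir at /etc/cni/net.d before the restart. A minimal way to verify the restarted runtime picked the settings up (a sketch, assuming the default /etc/containerd/config.toml path used here):
	    # the values patched in above
	    grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
	    # what the running CRI actually reports
	    sudo crictl info | grep -iE 'systemdcgroup|sandboximage'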
	I0919 22:47:28.003543  117334 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:47:28.003622  117334 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:47:28.008094  117334 start.go:563] Will wait 60s for crictl version
	I0919 22:47:28.008181  117334 ssh_runner.go:195] Run: which crictl
	I0919 22:47:28.012000  117334 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:47:28.047417  117334 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:47:28.047490  117334 ssh_runner.go:195] Run: containerd --version
	I0919 22:47:28.075836  117334 ssh_runner.go:195] Run: containerd --version
	I0919 22:47:28.105440  117334 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:47:28.107140  117334 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:47:28.125639  117334 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:47:28.129843  117334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:47:28.142430  117334 kubeadm.go:875] updating cluster {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socke
tVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:47:28.142571  117334 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:47:28.142617  117334 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:47:28.178090  117334 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:47:28.178110  117334 containerd.go:534] Images already preloaded, skipping extraction
	I0919 22:47:28.178173  117334 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:47:28.213217  117334 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:47:28.213237  117334 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:47:28.213244  117334 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0919 22:47:28.213345  117334 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
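	The kubelet flags shown above are written out as a systemd drop-in (the 313-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below). If a restart ever appears to pick up stale flags, the effective unit can be inspected with standard systemd tooling, for example:
	    # unit file plus every drop-in that systemd will merge in
	    systemctl cat kubelet
	    # confirm the ExecStart override (node-ip, hostname-override) took effect
	    systemctl show kubelet -p ExecStart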
	I0919 22:47:28.213394  117334 ssh_runner.go:195] Run: sudo crictl info
	I0919 22:47:28.248095  117334 cni.go:84] Creating CNI manager for ""
	I0919 22:47:28.248113  117334 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 22:47:28.248121  117334 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:47:28.248141  117334 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326307 NodeName:ha-326307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:47:28.248259  117334 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-326307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
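	The rendered config above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (see the 2221-byte scp below). If its schema ever needs checking by hand, recent kubeadm releases can validate such a file directly; the binary path here is the minikube-managed one, so treat the exact invocation as a sketch:
	    # validate the InitConfiguration/ClusterConfiguration/KubeletConfiguration documents
	    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new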
	
	I0919 22:47:28.248277  117334 kube-vip.go:115] generating kube-vip config ...
	I0919 22:47:28.248312  117334 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:47:28.261292  117334 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:47:28.261387  117334 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
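	This manifest lands in the kubelet static pod directory as /etc/kubernetes/manifests/kube-vip.yaml (the 1358-byte scp below). Since the ip_vs modules were unavailable, kube-vip runs in ARP mode only; once the pod is up, the VIP can be checked on the node roughly like this:
	    # the static pod's container should be running under kube-system
	    sudo crictl ps --name kube-vip
	    # on the current leader the VIP 192.168.49.254 is bound to eth0
	    ip addr show dev eth0 | grep 192.168.49.254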
	I0919 22:47:28.261450  117334 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:47:28.270966  117334 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:47:28.271022  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:47:28.280464  117334 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 22:47:28.299487  117334 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:47:28.318443  117334 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0919 22:47:28.337895  117334 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:47:28.357421  117334 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:47:28.361103  117334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:47:28.373457  117334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:47:28.437086  117334 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:47:28.466071  117334 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.2
	I0919 22:47:28.466093  117334 certs.go:194] generating shared ca certs ...
	I0919 22:47:28.466112  117334 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:47:28.466289  117334 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:47:28.466330  117334 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:47:28.466341  117334 certs.go:256] generating profile certs ...
	I0919 22:47:28.466443  117334 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:47:28.466473  117334 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.fd4d23c7
	I0919 22:47:28.466487  117334 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.fd4d23c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:47:28.652830  117334 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.fd4d23c7 ...
	I0919 22:47:28.652864  117334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.fd4d23c7: {Name:mkb11519fb8d768bcdcc882a07bc4392b845764e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:47:28.653069  117334 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.fd4d23c7 ...
	I0919 22:47:28.653093  117334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.fd4d23c7: {Name:mkbf9a9f521b248ec2b23cdf5c175cff4ab045bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:47:28.653248  117334 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.fd4d23c7 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:47:28.653439  117334 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.fd4d23c7 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
	I0919 22:47:28.653628  117334 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:47:28.653647  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:47:28.653665  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:47:28.653684  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:47:28.653702  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:47:28.653718  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:47:28.653735  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:47:28.653757  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:47:28.653776  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:47:28.653837  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:47:28.653876  117334 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:47:28.653891  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:47:28.653922  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:47:28.653953  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:47:28.654046  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:47:28.654109  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:47:28.654183  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:47:28.654206  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:28.654224  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:47:28.654947  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:47:28.687144  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:47:28.717147  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:47:28.746276  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:47:28.773744  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 22:47:28.802690  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:47:28.830097  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:47:28.857601  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:47:28.884331  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:47:28.911171  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:47:28.939294  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:47:28.967356  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:47:28.988237  117334 ssh_runner.go:195] Run: openssl version
	I0919 22:47:28.994799  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:47:29.009817  117334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:47:29.016062  117334 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:47:29.016129  117334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:47:29.027090  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:47:29.041077  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:47:29.056239  117334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:29.062303  117334 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:29.062372  117334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:29.072410  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:47:29.086885  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:47:29.104789  117334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:47:29.110276  117334 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:47:29.110343  117334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:47:29.121706  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:47:29.134943  117334 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:47:29.142169  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:47:29.154102  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:47:29.164446  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:47:29.173875  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:47:29.185007  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:47:29.192558  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
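	The openssl runs above pass -checkend 86400, so each control-plane certificate is accepted only if it stays valid for at least another 24 hours. The same check can be reproduced by hand for any of these files:
	    # exit status 0 = still valid one day from now, 1 = about to expire
	    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      && echo "still valid" || echo "expires within a day"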
	I0919 22:47:29.205365  117334 kubeadm.go:392] StartCluster: {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:47:29.205502  117334 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 22:47:29.205578  117334 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:47:29.264798  117334 cri.go:89] found id: "d2382f2921e1e078e07b4c391b7a15c8be51e2d9202cab3eff086a9d2dc79f18"
	I0919 22:47:29.264825  117334 cri.go:89] found id: "3a0f88ac96e637fd1a25a3a9d355ee87788a1c8050a80984c4abad203a1e14cd"
	I0919 22:47:29.264831  117334 cri.go:89] found id: "1373762f48215f505ded621a9917f9e9deda40b2815f0cbda59d8457cd2e760e"
	I0919 22:47:29.264836  117334 cri.go:89] found id: "d64ccf0dc7b40cde117eacc30ca9bbbf7a6b3a0ee53c1d14cce1120da436c90c"
	I0919 22:47:29.264840  117334 cri.go:89] found id: "2368bad9e0ff41be487b38fe2174f4d5d39df2d577f9274cb01bd1725b563fbb"
	I0919 22:47:29.264844  117334 cri.go:89] found id: "b1e652a991900c99afd1da5b6ebb61c2bdc03afb0dc96c44daf18ff7674dd987"
	I0919 22:47:29.264849  117334 cri.go:89] found id: "bb7d0d80b9c2303a244cfffc060085bd15528b57546277cdaaaf2dd707b68f1f"
	I0919 22:47:29.264852  117334 cri.go:89] found id: "fea1c0534d95d8681a40f476ef920c8ced5eb8897a63d871e66830a2e35509fc"
	I0919 22:47:29.264856  117334 cri.go:89] found id: "fff949799c16ffb392a665b0e5af2f326948a468e2495b8ea2fa176e06b5cfbf"
	I0919 22:47:29.264865  117334 cri.go:89] found id: "9b01ee2966e081085b732d62e68985fd9249574188499e7e99fa53ff3e585c2d"
	I0919 22:47:29.264868  117334 cri.go:89] found id: "c1e4cc3b9a7f1259a1339b951fd30079b99dc7acedc895c7ae90814405daad16"
	I0919 22:47:29.264872  117334 cri.go:89] found id: "63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284"
	I0919 22:47:29.264876  117334 cri.go:89] found id: "7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c"
	I0919 22:47:29.264880  117334 cri.go:89] found id: "c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6"
	I0919 22:47:29.264883  117334 cri.go:89] found id: "e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5"
	I0919 22:47:29.264898  117334 cri.go:89] found id: ""
	I0919 22:47:29.264945  117334 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0919 22:47:29.298635  117334 cri.go:116] JSON = [{"ociVersion":"1.2.0","id":"1373762f48215f505ded621a9917f9e9deda40b2815f0cbda59d8457cd2e760e","pid":1226,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1373762f48215f505ded621a9917f9e9deda40b2815f0cbda59d8457cd2e760e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1373762f48215f505ded621a9917f9e9deda40b2815f0cbda59d8457cd2e760e/rootfs","created":"2025-09-19T22:47:29.253988391Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri.sandbox-id":"f294a22440ffca75fd2e9f6848a1160394ef6c37e08deda35e634212572736bb","io.kubernetes.cri.sandbox-name":"kube-scheduler-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"02be84f36b44ed11e0db130395870414"},"owner":"root"},{"ociVersion":"1.2.0","id":"1b70ed4ca2d4c10e6e16e99b0
60e92540e12c577ce382140a6b3a103bfd24379","pid":1046,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b70ed4ca2d4c10e6e16e99b060e92540e12c577ce382140a6b3a103bfd24379","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b70ed4ca2d4c10e6e16e99b060e92540e12c577ce382140a6b3a103bfd24379/rootfs","created":"2025-09-19T22:47:29.114522233Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"1b70ed4ca2d4c10e6e16e99b060e92540e12c577ce382140a6b3a103bfd24379","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-ha-326307_57c850ed4c5abebc96f109c9dc04f98c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"57c850ed4c5abebc96f109
c9dc04f98c"},"owner":"root"},{"ociVersion":"1.2.0","id":"2368bad9e0ff41be487b38fe2174f4d5d39df2d577f9274cb01bd1725b563fbb","pid":1176,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2368bad9e0ff41be487b38fe2174f4d5d39df2d577f9274cb01bd1725b563fbb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2368bad9e0ff41be487b38fe2174f4d5d39df2d577f9274cb01bd1725b563fbb/rootfs","created":"2025-09-19T22:47:29.22978844Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri.sandbox-id":"b8b1b6232caadfb96c901dc4b98663802bb63f8c256f83649d7e19a13bd21eda","io.kubernetes.cri.sandbox-name":"kube-apiserver-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f6c96a149704fe94a8f3f9671ba1a8ff"},"owner":"root"},{"ociVersion":"1.2.0","id":"3a0f88ac96e637fd1a25a3a9d355ee87788a1c8050a80984c4abad20
3a1e14cd","pid":1247,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a0f88ac96e637fd1a25a3a9d355ee87788a1c8050a80984c4abad203a1e14cd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a0f88ac96e637fd1a25a3a9d355ee87788a1c8050a80984c4abad203a1e14cd/rootfs","created":"2025-09-19T22:47:29.283061875Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri.sandbox-id":"1b70ed4ca2d4c10e6e16e99b060e92540e12c577ce382140a6b3a103bfd24379","io.kubernetes.cri.sandbox-name":"kube-controller-manager-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"57c850ed4c5abebc96f109c9dc04f98c"},"owner":"root"},{"ociVersion":"1.2.0","id":"8b072a1ef0aefde029cac171fe575d2b9617cf6a1ddd78d416c920d48e41c704","pid":1054,"status":"running","bundle":"/run/containerd/io.containerd.runti
me.v2.task/k8s.io/8b072a1ef0aefde029cac171fe575d2b9617cf6a1ddd78d416c920d48e41c704","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b072a1ef0aefde029cac171fe575d2b9617cf6a1ddd78d416c920d48e41c704/rootfs","created":"2025-09-19T22:47:29.115043342Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"8b072a1ef0aefde029cac171fe575d2b9617cf6a1ddd78d416c920d48e41c704","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-vip-ha-326307_11fc7e0ddcb5f54efe3aa73e9d205abc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-vip-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"11fc7e0ddcb5f54efe3aa73e9d205abc"},"owner":"root"},{"ociVersion":"1.2.0","id":"a02766bee21204caf69ae959a43e2c1ba579b3ddfcc1698e8bf28a0c454da099","pid":998,"status":"runni
ng","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a02766bee21204caf69ae959a43e2c1ba579b3ddfcc1698e8bf28a0c454da099","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a02766bee21204caf69ae959a43e2c1ba579b3ddfcc1698e8bf28a0c454da099/rootfs","created":"2025-09-19T22:47:29.079313993Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a02766bee21204caf69ae959a43e2c1ba579b3ddfcc1698e8bf28a0c454da099","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-ha-326307_044bbdcbe96821df073716c7f05fb17d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"044bbdcbe96821df073716c7f05fb17d"},"owner":"root"},{"ociVersion":"1.2.0","id":"b8b1b6232caadfb96c901dc4b98663802bb63f8c256f8364
9d7e19a13bd21eda","pid":1007,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8b1b6232caadfb96c901dc4b98663802bb63f8c256f83649d7e19a13bd21eda","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8b1b6232caadfb96c901dc4b98663802bb63f8c256f83649d7e19a13bd21eda/rootfs","created":"2025-09-19T22:47:29.081142893Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"b8b1b6232caadfb96c901dc4b98663802bb63f8c256f83649d7e19a13bd21eda","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-ha-326307_f6c96a149704fe94a8f3f9671ba1a8ff","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f6c96a149704fe94a8f3f9671ba1a8ff"},"owner":"root"},{"ociVersion
":"1.2.0","id":"d2382f2921e1e078e07b4c391b7a15c8be51e2d9202cab3eff086a9d2dc79f18","pid":1240,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2382f2921e1e078e07b4c391b7a15c8be51e2d9202cab3eff086a9d2dc79f18","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2382f2921e1e078e07b4c391b7a15c8be51e2d9202cab3eff086a9d2dc79f18/rootfs","created":"2025-09-19T22:47:29.265888027Z","annotations":{"io.kubernetes.cri.container-name":"kube-vip","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri.sandbox-id":"8b072a1ef0aefde029cac171fe575d2b9617cf6a1ddd78d416c920d48e41c704","io.kubernetes.cri.sandbox-name":"kube-vip-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"11fc7e0ddcb5f54efe3aa73e9d205abc"},"owner":"root"},{"ociVersion":"1.2.0","id":"d64ccf0dc7b40cde117eacc30ca9bbbf7a6b3a0ee53c1d14cce1120da436c90c","pid":1165,"status":"running","bundle":"/run/con
tainerd/io.containerd.runtime.v2.task/k8s.io/d64ccf0dc7b40cde117eacc30ca9bbbf7a6b3a0ee53c1d14cce1120da436c90c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d64ccf0dc7b40cde117eacc30ca9bbbf7a6b3a0ee53c1d14cce1120da436c90c/rootfs","created":"2025-09-19T22:47:29.231195148Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"a02766bee21204caf69ae959a43e2c1ba579b3ddfcc1698e8bf28a0c454da099","io.kubernetes.cri.sandbox-name":"etcd-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"044bbdcbe96821df073716c7f05fb17d"},"owner":"root"},{"ociVersion":"1.2.0","id":"f294a22440ffca75fd2e9f6848a1160394ef6c37e08deda35e634212572736bb","pid":1034,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f294a22440ffca75fd2e9f6848a1160394ef6c37e08deda35e634212572736bb","rootfs":"/run/containerd/io.co
ntainerd.runtime.v2.task/k8s.io/f294a22440ffca75fd2e9f6848a1160394ef6c37e08deda35e634212572736bb/rootfs","created":"2025-09-19T22:47:29.105362182Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f294a22440ffca75fd2e9f6848a1160394ef6c37e08deda35e634212572736bb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-ha-326307_02be84f36b44ed11e0db130395870414","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"02be84f36b44ed11e0db130395870414"},"owner":"root"}]
	I0919 22:47:29.298835  117334 cri.go:126] list returned 10 containers
	I0919 22:47:29.298850  117334 cri.go:129] container: {ID:1373762f48215f505ded621a9917f9e9deda40b2815f0cbda59d8457cd2e760e Status:created}
	I0919 22:47:29.298890  117334 cri.go:135] skipping {1373762f48215f505ded621a9917f9e9deda40b2815f0cbda59d8457cd2e760e created}: state = "created", want "paused"
	I0919 22:47:29.298909  117334 cri.go:129] container: {ID:1b70ed4ca2d4c10e6e16e99b060e92540e12c577ce382140a6b3a103bfd24379 Status:running}
	I0919 22:47:29.298925  117334 cri.go:131] skipping 1b70ed4ca2d4c10e6e16e99b060e92540e12c577ce382140a6b3a103bfd24379 - not in ps
	I0919 22:47:29.298937  117334 cri.go:129] container: {ID:2368bad9e0ff41be487b38fe2174f4d5d39df2d577f9274cb01bd1725b563fbb Status:running}
	I0919 22:47:29.298946  117334 cri.go:135] skipping {2368bad9e0ff41be487b38fe2174f4d5d39df2d577f9274cb01bd1725b563fbb running}: state = "running", want "paused"
	I0919 22:47:29.298952  117334 cri.go:129] container: {ID:3a0f88ac96e637fd1a25a3a9d355ee87788a1c8050a80984c4abad203a1e14cd Status:created}
	I0919 22:47:29.298962  117334 cri.go:135] skipping {3a0f88ac96e637fd1a25a3a9d355ee87788a1c8050a80984c4abad203a1e14cd created}: state = "created", want "paused"
	I0919 22:47:29.298968  117334 cri.go:129] container: {ID:8b072a1ef0aefde029cac171fe575d2b9617cf6a1ddd78d416c920d48e41c704 Status:running}
	I0919 22:47:29.298975  117334 cri.go:131] skipping 8b072a1ef0aefde029cac171fe575d2b9617cf6a1ddd78d416c920d48e41c704 - not in ps
	I0919 22:47:29.298980  117334 cri.go:129] container: {ID:a02766bee21204caf69ae959a43e2c1ba579b3ddfcc1698e8bf28a0c454da099 Status:running}
	I0919 22:47:29.298984  117334 cri.go:131] skipping a02766bee21204caf69ae959a43e2c1ba579b3ddfcc1698e8bf28a0c454da099 - not in ps
	I0919 22:47:29.298989  117334 cri.go:129] container: {ID:b8b1b6232caadfb96c901dc4b98663802bb63f8c256f83649d7e19a13bd21eda Status:running}
	I0919 22:47:29.298995  117334 cri.go:131] skipping b8b1b6232caadfb96c901dc4b98663802bb63f8c256f83649d7e19a13bd21eda - not in ps
	I0919 22:47:29.299001  117334 cri.go:129] container: {ID:d2382f2921e1e078e07b4c391b7a15c8be51e2d9202cab3eff086a9d2dc79f18 Status:running}
	I0919 22:47:29.299015  117334 cri.go:135] skipping {d2382f2921e1e078e07b4c391b7a15c8be51e2d9202cab3eff086a9d2dc79f18 running}: state = "running", want "paused"
	I0919 22:47:29.299028  117334 cri.go:129] container: {ID:d64ccf0dc7b40cde117eacc30ca9bbbf7a6b3a0ee53c1d14cce1120da436c90c Status:running}
	I0919 22:47:29.299033  117334 cri.go:135] skipping {d64ccf0dc7b40cde117eacc30ca9bbbf7a6b3a0ee53c1d14cce1120da436c90c running}: state = "running", want "paused"
	I0919 22:47:29.299047  117334 cri.go:129] container: {ID:f294a22440ffca75fd2e9f6848a1160394ef6c37e08deda35e634212572736bb Status:running}
	I0919 22:47:29.299054  117334 cri.go:131] skipping f294a22440ffca75fd2e9f6848a1160394ef6c37e08deda35e634212572736bb - not in ps
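The block above first asks crictl for every kube-system container ID known to the CRI, then cross-references them against the runc task list and keeps only containers whose runc state is "paused" (none here, so there is nothing to unpause). A rough shell equivalent of that filter, assuming the same runc root; jq is used only for illustration and is not part of minikube's own flow:

	# list kube-system container IDs known to the CRI
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# keep only tasks runc reports as "paused"
	sudo runc --root /run/containerd/runc/k8s.io list -f json \
	  | jq -r '.[] | select(.status == "paused") | .id'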
	I0919 22:47:29.299103  117334 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:47:29.313552  117334 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:47:29.313573  117334 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:47:29.313711  117334 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:47:29.328032  117334 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:47:29.328544  117334 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-326307" does not appear in /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:47:29.328687  117334 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14678/kubeconfig needs updating (will repair): [kubeconfig missing "ha-326307" cluster setting kubeconfig missing "ha-326307" context setting]
	I0919 22:47:29.329054  117334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:47:29.330465  117334 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:47:29.331017  117334 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:47:29.331119  117334 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:47:29.331255  117334 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:47:29.331294  117334 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:47:29.331318  117334 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:47:29.331333  117334 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:47:29.331799  117334 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:47:29.346171  117334 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:47:29.346200  117334 kubeadm.go:593] duration metric: took 32.620051ms to restartPrimaryControlPlane
	I0919 22:47:29.346212  117334 kubeadm.go:394] duration metric: took 140.858312ms to StartCluster
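The restart path above decides whether kubeadm must re-run by diffing the freshly rendered kubeadm.yaml against the one already on the node; since they match, reconfiguration is skipped. A minimal sketch of that comparison, using the paths shown in the log:

	# if the rendered config matches what is already on disk, kubeadm reconfiguration is skipped
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "no reconfiguration needed"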
	I0919 22:47:29.346233  117334 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:47:29.346317  117334 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:47:29.346994  117334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:47:29.347231  117334 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:47:29.347255  117334 start.go:241] waiting for startup goroutines ...
	I0919 22:47:29.347272  117334 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:47:29.347482  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:29.351116  117334 out.go:179] * Enabled addons: 
	I0919 22:47:29.353346  117334 addons.go:514] duration metric: took 6.07167ms for enable addons: enabled=[]
	I0919 22:47:29.353405  117334 start.go:246] waiting for cluster config update ...
	I0919 22:47:29.353417  117334 start.go:255] writing updated cluster config ...
	I0919 22:47:29.355660  117334 out.go:203] 
	I0919 22:47:29.359640  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:29.359776  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:29.367412  117334 out.go:179] * Starting "ha-326307-m02" control-plane node in "ha-326307" cluster
	I0919 22:47:29.369430  117334 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:47:29.370763  117334 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:47:29.371887  117334 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:47:29.371912  117334 cache.go:58] Caching tarball of preloaded images
	I0919 22:47:29.371963  117334 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:47:29.372017  117334 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:47:29.372033  117334 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:47:29.372127  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:29.400033  117334 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:47:29.400059  117334 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:47:29.400074  117334 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:47:29.400097  117334 start.go:360] acquireMachinesLock for ha-326307-m02: {Name:mk4919a9b19250804b0f53d01bcd11efaf9a431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:47:29.400165  117334 start.go:364] duration metric: took 44.585µs to acquireMachinesLock for "ha-326307-m02"
	I0919 22:47:29.400188  117334 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:47:29.400196  117334 fix.go:54] fixHost starting: m02
	I0919 22:47:29.400407  117334 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:47:29.422810  117334 fix.go:112] recreateIfNeeded on ha-326307-m02: state=Stopped err=<nil>
	W0919 22:47:29.422843  117334 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:47:29.424776  117334 out.go:252] * Restarting existing docker container for "ha-326307-m02" ...
	I0919 22:47:29.424858  117334 cli_runner.go:164] Run: docker start ha-326307-m02
	I0919 22:47:29.713119  117334 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:47:29.736526  117334 kic.go:430] container "ha-326307-m02" state is running.
	I0919 22:47:29.737086  117334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:47:29.760996  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:29.761508  117334 machine.go:93] provisionDockerMachine start ...
	I0919 22:47:29.761592  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:29.786125  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:29.786524  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32849 <nil> <nil>}
	I0919 22:47:29.786544  117334 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:47:29.787500  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33254->127.0.0.1:32849: read: connection reset by peer
	I0919 22:47:32.925586  117334 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:47:32.925612  117334 ubuntu.go:182] provisioning hostname "ha-326307-m02"
	I0919 22:47:32.925675  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:32.944726  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:32.944992  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32849 <nil> <nil>}
	I0919 22:47:32.945010  117334 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m02 && echo "ha-326307-m02" | sudo tee /etc/hostname
	I0919 22:47:33.096423  117334 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:47:33.096496  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:33.114711  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:33.114944  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32849 <nil> <nil>}
	I0919 22:47:33.114969  117334 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:47:33.253218  117334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:47:33.253247  117334 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:47:33.253270  117334 ubuntu.go:190] setting up certificates
	I0919 22:47:33.253285  117334 provision.go:84] configureAuth start
	I0919 22:47:33.253337  117334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:47:33.278232  117334 provision.go:143] copyHostCerts
	I0919 22:47:33.278270  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:47:33.278301  117334 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:47:33.278314  117334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:47:33.278394  117334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:47:33.278487  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:47:33.278510  117334 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:47:33.278520  117334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:47:33.278558  117334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:47:33.278615  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:47:33.278637  117334 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:47:33.278645  117334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:47:33.278683  117334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:47:33.278747  117334 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m02 san=[127.0.0.1 192.168.49.3 ha-326307-m02 localhost minikube]
	I0919 22:47:33.332031  117334 provision.go:177] copyRemoteCerts
	I0919 22:47:33.332083  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:47:33.332131  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:33.354921  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:47:33.462587  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:47:33.462724  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:47:33.493993  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:47:33.494059  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:47:33.531622  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:47:33.531687  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:47:33.568741  117334 provision.go:87] duration metric: took 315.438937ms to configureAuth
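configureAuth above regenerates the machine's server certificate with SANs covering 127.0.0.1, the node IP 192.168.49.3, the hostname and localhost, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. A hedged sketch of how the copied server certificate could be inspected afterwards, using the remote path from the log:

	# confirm the provisioned server cert carries the expected Subject Alternative Names
	sudo openssl x509 -noout -text -in /etc/docker/server.pem \
	  | grep -A1 'Subject Alternative Name'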
	I0919 22:47:33.568793  117334 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:47:33.569097  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:33.569112  117334 machine.go:96] duration metric: took 3.807571867s to provisionDockerMachine
	I0919 22:47:33.569121  117334 start.go:293] postStartSetup for "ha-326307-m02" (driver="docker")
	I0919 22:47:33.569133  117334 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:47:33.569229  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:47:33.569284  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:33.595481  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:47:33.707066  117334 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:47:33.712405  117334 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:47:33.712461  117334 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:47:33.712475  117334 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:47:33.712488  117334 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:47:33.712501  117334 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:47:33.712564  117334 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:47:33.712671  117334 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:47:33.712686  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:47:33.712807  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:47:33.725851  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:47:33.760705  117334 start.go:296] duration metric: took 191.567136ms for postStartSetup
	I0919 22:47:33.760799  117334 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:47:33.760847  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:33.786454  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:47:33.886966  117334 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:47:33.892336  117334 fix.go:56] duration metric: took 4.492132883s for fixHost
	I0919 22:47:33.892363  117334 start.go:83] releasing machines lock for "ha-326307-m02", held for 4.492187006s
	I0919 22:47:33.892445  117334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:47:33.918146  117334 out.go:179] * Found network options:
	I0919 22:47:33.919682  117334 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:47:33.920993  117334 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:47:33.921046  117334 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:47:33.921133  117334 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:47:33.921217  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:33.921236  117334 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:47:33.921296  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:33.942856  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:47:33.944992  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:47:34.044494  117334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:47:34.146655  117334 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:47:34.146752  117334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:47:34.157411  117334 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
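The two find/sed passes above patch any loopback CNI conf to carry an explicit "name" and cniVersion 1.0.0, and would rename bridge/podman configs to *.mk_disabled (none were present here). Based on those sed expressions, the patched loopback conf ends up roughly this shape — a sketch only, not a file captured from the node:

	cat <<'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "loopback",
	  "type": "loopback"
	}
	EOF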
	I0919 22:47:34.157460  117334 start.go:495] detecting cgroup driver to use...
	I0919 22:47:34.157498  117334 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:47:34.157577  117334 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:47:34.173690  117334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:47:34.187634  117334 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:47:34.187699  117334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:47:34.216342  117334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:47:34.232606  117334 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:47:34.446768  117334 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:47:34.602403  117334 docker.go:234] disabling docker service ...
	I0919 22:47:34.602480  117334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:47:34.623629  117334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:47:34.643560  117334 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:47:34.782267  117334 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:47:34.931791  117334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:47:34.957097  117334 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:47:34.994360  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:47:35.017010  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:47:35.036523  117334 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:47:35.036620  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:47:35.066491  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:47:35.083515  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:47:35.103774  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:47:35.125189  117334 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:47:35.139400  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:47:35.159122  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:47:35.174784  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:47:35.189042  117334 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:47:35.203756  117334 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:47:35.218480  117334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:47:35.373466  117334 ssh_runner.go:195] Run: sudo systemctl restart containerd
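The sed edits above switch containerd's runc runtime to the systemd cgroup driver, pin the pause image, normalize the runtime to io.containerd.runc.v2 and re-enable unprivileged ports before the daemon is restarted. A condensed sketch of the key toggle and restart, assuming the stock /etc/containerd/config.toml layout shown in the log:

	# flip the runc cgroup driver to systemd and restart containerd
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd
	# confirm the runtime came back up
	sudo systemctl is-active containerd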
	I0919 22:47:35.787529  117334 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:47:35.787609  117334 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:47:35.792630  117334 start.go:563] Will wait 60s for crictl version
	I0919 22:47:35.792696  117334 ssh_runner.go:195] Run: which crictl
	I0919 22:47:35.797656  117334 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:47:35.849084  117334 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:47:35.849174  117334 ssh_runner.go:195] Run: containerd --version
	I0919 22:47:35.880560  117334 ssh_runner.go:195] Run: containerd --version
	I0919 22:47:35.911964  117334 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:47:35.913181  117334 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:47:35.914141  117334 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:47:35.936843  117334 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:47:35.942788  117334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:47:35.959831  117334 mustload.go:65] Loading cluster: ha-326307
	I0919 22:47:35.960361  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:35.960725  117334 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:47:35.983548  117334 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:47:35.983903  117334 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.3
	I0919 22:47:35.983923  117334 certs.go:194] generating shared ca certs ...
	I0919 22:47:35.983943  117334 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:47:35.984087  117334 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:47:35.984197  117334 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:47:35.984214  117334 certs.go:256] generating profile certs ...
	I0919 22:47:35.984321  117334 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:47:35.984407  117334 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4
	I0919 22:47:35.984452  117334 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:47:35.984465  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:47:35.984481  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:47:35.984502  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:47:35.984517  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:47:35.984529  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:47:35.984558  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:47:35.984580  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:47:35.984596  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:47:35.984682  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:47:35.984741  117334 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:47:35.984763  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:47:35.984810  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:47:35.984855  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:47:35.984890  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:47:35.984952  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:47:35.984998  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:47:35.985019  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:47:35.985040  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:35.985116  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:36.013559  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32844 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:47:36.108444  117334 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:47:36.114647  117334 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:47:36.134856  117334 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:47:36.139915  117334 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:47:36.157816  117334 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:47:36.162850  117334 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:47:36.182313  117334 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:47:36.186562  117334 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:47:36.205411  117334 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:47:36.210518  117334 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:47:36.228454  117334 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:47:36.233958  117334 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:47:36.250475  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:47:36.287936  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:47:36.318473  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:47:36.347137  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:47:36.377777  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 22:47:36.409829  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:47:36.443400  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:47:36.478291  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:47:36.515327  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:47:36.554063  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:47:36.590241  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:47:36.621753  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:47:36.642425  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:47:36.664996  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:47:36.686785  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:47:36.707098  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:47:36.728558  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:47:36.749312  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:47:36.770481  117334 ssh_runner.go:195] Run: openssl version
	I0919 22:47:36.777442  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:47:36.789581  117334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:47:36.793657  117334 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:47:36.793719  117334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:47:36.801340  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:47:36.812179  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:47:36.824436  117334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:36.828381  117334 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:36.828455  117334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:36.835691  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:47:36.845784  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:47:36.856763  117334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:47:36.860912  117334 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:47:36.860989  117334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:47:36.869849  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:47:36.880884  117334 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:47:36.885178  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:47:36.892956  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:47:36.900721  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:47:36.908539  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:47:36.916037  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:47:36.924529  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
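The six `openssl x509 -checkend 86400` runs above succeed only if each certificate stays valid for at least another 24 hours (86400 seconds). A minimal Go sketch of the same expiry check done in-process with crypto/x509, using one of the certificate paths from the log (the helper name is illustrative, not minikube's own code):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // certValidFor reports whether the PEM-encoded certificate at path is still
    // valid for at least d, i.e. what `openssl x509 -checkend <seconds>` verifies.
    func certValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// Valid if the expiry lies after now+d.
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("valid for 24h:", ok)
    }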
	I0919 22:47:36.933540  117334 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0919 22:47:36.933663  117334 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:47:36.933702  117334 kube-vip.go:115] generating kube-vip config ...
	I0919 22:47:36.933754  117334 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:47:36.947749  117334 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:47:36.947810  117334 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
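Because the `lsmod | grep ip_vs` probe above exited with status 1 and empty output, the ip_vs kernel module is treated as unavailable, and the generated kube-vip manifest falls back to ARP-based VIP failover (vip_arp, cp_enable) instead of IPVS control-plane load balancing. A minimal sketch of that kind of module probe, with an assumed helper name:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // ipvsAvailable mirrors the `lsmod | grep ip_vs` check from the log:
    // it returns true only when an ip_vs entry appears in lsmod output.
    func ipvsAvailable() (bool, error) {
    	out, err := exec.Command("lsmod").Output()
    	if err != nil {
    		return false, err
    	}
    	for _, line := range strings.Split(string(out), "\n") {
    		if strings.HasPrefix(line, "ip_vs") {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := ipvsAvailable()
    	fmt.Println("ip_vs loaded:", ok, "err:", err)
    }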
	I0919 22:47:36.947865  117334 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:47:36.958049  117334 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:47:36.958122  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:47:36.969748  117334 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:47:36.994919  117334 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:47:37.018759  117334 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:47:37.040341  117334 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:47:37.044968  117334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
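The grep/cp pipeline above idempotently pins control-plane.minikube.internal to the VIP 192.168.49.254 in /etc/hosts: any existing line for that name is dropped and a fresh entry is appended. A minimal Go sketch of the same rewrite (it writes in place rather than going through a temp file like the shell version):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pinHostEntry rewrites hostsPath so that exactly one line maps name to ip,
    // mirroring the grep -v / echo / cp pipeline from the log.
    func pinHostEntry(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any stale entry for this name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := pinHostEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("host entry pinned")
    }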
	I0919 22:47:37.058562  117334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:47:37.191250  117334 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:47:37.204699  117334 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:47:37.204946  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:37.207697  117334 out.go:179] * Verifying Kubernetes components...
	I0919 22:47:37.208832  117334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:47:37.335597  117334 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:47:37.349256  117334 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:47:37.349320  117334 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:47:37.349539  117334 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m02" to be "Ready" ...
	I0919 22:47:37.358002  117334 node_ready.go:49] node "ha-326307-m02" is "Ready"
	I0919 22:47:37.358037  117334 node_ready.go:38] duration metric: took 8.469761ms for node "ha-326307-m02" to be "Ready" ...
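The node_ready.go wait above polls the node object until its NodeReady condition is True (here it already was, so the wait took ~8ms). A minimal client-go sketch of that kind of readiness check; the node name comes from the log, while the kubeconfig source and helper are illustrative:

    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has the NodeReady condition set to True.
    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	ok, err := nodeReady(context.Background(), cs, "ha-326307-m02")
    	fmt.Println("ready:", ok, "err:", err)
    }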
	I0919 22:47:37.358053  117334 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:47:37.358113  117334 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:47:37.371449  117334 api_server.go:72] duration metric: took 166.706719ms to wait for apiserver process to appear ...
	I0919 22:47:37.371495  117334 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:47:37.371518  117334 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:47:37.381373  117334 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:47:37.382308  117334 api_server.go:141] control plane version: v1.34.0
	I0919 22:47:37.382336  117334 api_server.go:131] duration metric: took 10.833174ms to wait for apiserver health ...
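The healthz wait above is a plain HTTPS GET against the apiserver that is considered healthy once the endpoint returns 200 "ok". A minimal sketch of such a probe, reusing the endpoint and CA file paths that appear in this log (the helper itself is illustrative):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    // checkHealthz performs the same kind of probe as
    // "Checking apiserver healthz at https://192.168.49.2:8443/healthz".
    func checkHealthz(url, caFile string) error {
    	caPEM, err := os.ReadFile(caFile)
    	if err != nil {
    		return err
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{RootCAs: pool},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    	return nil
    }

    func main() {
    	ca := "/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt"
    	if err := checkHealthz("https://192.168.49.2:8443/healthz", ca); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }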
	I0919 22:47:37.382347  117334 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:47:37.388868  117334 system_pods.go:59] 24 kube-system pods found
	I0919 22:47:37.388907  117334 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:47:37.388914  117334 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:47:37.388923  117334 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:47:37.388931  117334 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:47:37.388934  117334 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running
	I0919 22:47:37.388938  117334 system_pods.go:61] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:47:37.388941  117334 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:47:37.388944  117334 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:47:37.388948  117334 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:47:37.388955  117334 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:47:37.388962  117334 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running
	I0919 22:47:37.388968  117334 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:47:37.388978  117334 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:47:37.388981  117334 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running
	I0919 22:47:37.388984  117334 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:47:37.388987  117334 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:47:37.388991  117334 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:47:37.388994  117334 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:47:37.388998  117334 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:47:37.389001  117334 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running
	I0919 22:47:37.389004  117334 system_pods.go:61] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:47:37.389006  117334 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:47:37.389008  117334 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:47:37.389011  117334 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:47:37.389016  117334 system_pods.go:74] duration metric: took 6.663946ms to wait for pod list to return data ...
	I0919 22:47:37.389022  117334 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:47:37.392401  117334 default_sa.go:45] found service account: "default"
	I0919 22:47:37.392424  117334 default_sa.go:55] duration metric: took 3.397243ms for default service account to be created ...
	I0919 22:47:37.392433  117334 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:47:37.399599  117334 system_pods.go:86] 24 kube-system pods found
	I0919 22:47:37.399633  117334 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:47:37.399642  117334 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:47:37.399653  117334 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:47:37.399658  117334 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:47:37.399662  117334 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running
	I0919 22:47:37.399666  117334 system_pods.go:89] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:47:37.399669  117334 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:47:37.399672  117334 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:47:37.399677  117334 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:47:37.399683  117334 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:47:37.399687  117334 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running
	I0919 22:47:37.399694  117334 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:47:37.399699  117334 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:47:37.399705  117334 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running
	I0919 22:47:37.399711  117334 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:47:37.399716  117334 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:47:37.399721  117334 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:47:37.399725  117334 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:47:37.399731  117334 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:47:37.399735  117334 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running
	I0919 22:47:37.399738  117334 system_pods.go:89] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:47:37.399742  117334 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:47:37.399746  117334 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:47:37.399749  117334 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:47:37.399759  117334 system_pods.go:126] duration metric: took 7.320503ms to wait for k8s-apps to be running ...
	I0919 22:47:37.399765  117334 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:47:37.399808  117334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:47:37.412914  117334 system_svc.go:56] duration metric: took 13.132784ms WaitForService to wait for kubelet
	I0919 22:47:37.412941  117334 kubeadm.go:578] duration metric: took 208.206141ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:47:37.412955  117334 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:47:37.416336  117334 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:47:37.416362  117334 node_conditions.go:123] node cpu capacity is 8
	I0919 22:47:37.416374  117334 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:47:37.416378  117334 node_conditions.go:123] node cpu capacity is 8
	I0919 22:47:37.416382  117334 node_conditions.go:105] duration metric: took 3.422712ms to run NodePressure ...
	I0919 22:47:37.416393  117334 start.go:241] waiting for startup goroutines ...
	I0919 22:47:37.416414  117334 start.go:255] writing updated cluster config ...
	I0919 22:47:37.418704  117334 out.go:203] 
	I0919 22:47:37.420426  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:37.420560  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:37.422628  117334 out.go:179] * Starting "ha-326307-m04" worker node in "ha-326307" cluster
	I0919 22:47:37.424537  117334 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:47:37.426046  117334 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:47:37.427281  117334 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:47:37.427308  117334 cache.go:58] Caching tarball of preloaded images
	I0919 22:47:37.427343  117334 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:47:37.427431  117334 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:47:37.427448  117334 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:47:37.427555  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:37.449457  117334 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:47:37.449492  117334 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:47:37.449508  117334 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:47:37.449543  117334 start.go:360] acquireMachinesLock for ha-326307-m04: {Name:mk65e4f546dae46b7c7a6cfe6f590e09a0a01676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:47:37.449601  117334 start.go:364] duration metric: took 44.457µs to acquireMachinesLock for "ha-326307-m04"
	I0919 22:47:37.449624  117334 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:47:37.449630  117334 fix.go:54] fixHost starting: m04
	I0919 22:47:37.449822  117334 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:47:37.469296  117334 fix.go:112] recreateIfNeeded on ha-326307-m04: state=Stopped err=<nil>
	W0919 22:47:37.469328  117334 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:47:37.472893  117334 out.go:252] * Restarting existing docker container for "ha-326307-m04" ...
	I0919 22:47:37.473037  117334 cli_runner.go:164] Run: docker start ha-326307-m04
	I0919 22:47:37.730215  117334 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:47:37.769395  117334 kic.go:430] container "ha-326307-m04" state is running.
	I0919 22:47:37.769860  117334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m04
	I0919 22:47:37.802691  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:37.803232  117334 machine.go:93] provisionDockerMachine start ...
	I0919 22:47:37.803452  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	I0919 22:47:37.830966  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:37.831267  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32854 <nil> <nil>}
	I0919 22:47:37.831280  117334 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:47:37.832368  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40452->127.0.0.1:32854: read: connection reset by peer
	I0919 22:47:40.870002  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:43.907381  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:46.957770  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:49.994121  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:53.032142  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:56.070745  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:59.108416  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:02.147842  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:05.186639  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:08.223489  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:11.260279  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:14.297886  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:17.336139  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:20.372068  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:23.408593  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:26.447629  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:29.485125  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:32.522879  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:35.561474  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:38.597754  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:41.635956  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:44.673554  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:47.712342  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:50.749576  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:53.787102  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:56.825425  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:59.862260  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:02.899291  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:05.938332  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:08.975744  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:12.015641  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:15.054493  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:18.091218  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:21.132315  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:24.170051  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:27.208961  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:30.248209  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:33.285497  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:36.323122  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:39.360791  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:42.398655  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:45.436612  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:48.473310  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:51.510574  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:54.549231  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:57.586924  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:00.625036  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:03.663968  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:06.702355  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:09.739425  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:12.775624  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:15.814726  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:18.852079  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:21.891087  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:24.931250  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:27.968596  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:31.006284  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:34.044202  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:37.083109  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:40.084266  117334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
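Every retry in the block above fails SSH authentication with only the publickey method attempted, which points at a key mismatch on the restarted ha-326307-m04 container rather than a network problem (the TCP dial itself succeeds after the first attempt). A minimal sketch of the kind of dial being retried, using golang.org/x/crypto/ssh; the user, key path, attempt count, and backoff are assumptions, only the address comes from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // dialWithRetry repeatedly attempts an SSH connection authenticated with a
    // single private key over publickey auth, returning the last error on failure.
    func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
    	keyPEM, err := os.ReadFile(keyPath)
    	if err != nil {
    		return nil, err
    	}
    	signer, err := ssh.ParsePrivateKey(keyPEM)
    	if err != nil {
    		return nil, err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    		Timeout:         10 * time.Second,
    	}
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			return client, nil
    		}
    		// e.g. "ssh: unable to authenticate, attempted methods [none publickey]"
    		lastErr = err
    		time.Sleep(3 * time.Second)
    	}
    	return nil, lastErr
    }

    func main() {
    	client, err := dialWithRetry("127.0.0.1:32854", "docker", os.ExpandEnv("$HOME/.ssh/id_rsa"), 5)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer client.Close()
    	fmt.Println("connected")
    }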
	I0919 22:50:40.084324  117334 ubuntu.go:182] provisioning hostname "ha-326307-m04"
	I0919 22:50:40.084394  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	I0919 22:50:40.104999  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:50:40.105261  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32854 <nil> <nil>}
	I0919 22:50:40.105277  117334 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m04 && echo "ha-326307-m04" | sudo tee /etc/hostname
	I0919 22:50:40.142494  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:43.179723  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:46.219078  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:49.257468  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:52.294725  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:55.333453  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:58.369775  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:01.410204  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:04.447539  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:07.484969  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:10.522879  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:13.560883  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:16.598958  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:19.636636  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:22.675329  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:25.715661  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:28.752029  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:31.789804  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:34.827282  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:37.864801  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:40.902097  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:43.938239  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:46.977234  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:50.013273  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:53.050179  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:56.088669  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:59.125751  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:02.164113  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:05.202127  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:08.238804  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:11.276877  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:14.314627  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:17.352864  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:20.390447  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:23.426725  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:26.464453  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:29.501151  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:32.537422  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:35.576049  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:38.612605  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:41.651300  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:44.689779  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:47.727687  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:50.765824  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:53.803525  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:56.842262  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:59.879359  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:02.917098  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:05.954879  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:08.991550  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:12.029919  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-326307 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-326307
helpers_test.go:243: (dbg) docker inspect ha-326307:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	        "Created": "2025-09-19T22:23:18.619000062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 117526,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:47:22.643303586Z",
	            "FinishedAt": "2025-09-19T22:47:21.854788404Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/hosts",
	        "LogPath": "/var/lib/docker/containers/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3/5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3-json.log",
	        "Name": "/ha-326307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-326307:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-326307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e0f1fe86b0818450b29e917af8d9dda81e310353b20f615454f610bda0c56f3",
	                "LowerDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30bd57649629d477287235001469bfd41a7805ca0a999738e74eba07285cd630/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-326307",
	                "Source": "/var/lib/docker/volumes/ha-326307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-326307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-326307",
	                "name.minikube.sigs.k8s.io": "ha-326307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d98b4998362c3c6eb9475e2dc63e93250096e886393fc8c0446e01121aa55de",
	            "SandboxKey": "/var/run/docker/netns/5d98b4998362",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32848"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-326307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:e4:60:9c:1f:e9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "465af21e2d8d112a34f14f7c3ab89eac4e6e57f582ff8e32b514381f55dd085e",
	                    "EndpointID": "73775f92882808eac2060d1cd6cdf5fc68b2a25566f95c3111f042bf33ce67ba",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-326307",
	                        "5e0f1fe86b08"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
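The inspect output above shows every published port (22, 2376, 5000, 8443 and 32443/tcp) bound to 127.0.0.1 on an ephemeral host port, with the container attached to the ha-326307 network at 192.168.49.2. As a minimal sketch, the same Go template that the restart log uses further below can pull a single mapping back out of the live container (assuming the ha-326307 container still exists on this host):

	# Print the host port mapped to the container's SSH port (22/tcp); for the state captured above this prints 32844.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-326307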
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-326307 -n ha-326307
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 logs -n 25: (1.591377537s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-326307 cp ha-326307-m03:/home/docker/cp-test.txt ha-326307-m04:/home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test_ha-326307-m03_ha-326307-m04.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp testdata/cp-test.txt ha-326307-m04:/home/docker/cp-test.txt                                                            │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile865153148/001/cp-test_ha-326307-m04.txt │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307:/home/docker/cp-test_ha-326307-m04_ha-326307.txt                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307.txt                                                │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m02:/home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m02 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m02.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ cp      │ ha-326307 cp ha-326307-m04:/home/docker/cp-test.txt ha-326307-m03:/home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt              │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ ssh     │ ha-326307 ssh -n ha-326307-m03 sudo cat /home/docker/cp-test_ha-326307-m04_ha-326307-m03.txt                                        │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ node    │ ha-326307 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ node    │ ha-326307 node start m02 --alsologtostderr -v 5                                                                                     │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │ 19 Sep 25 22:38 UTC │
	│ node    │ ha-326307 node list --alsologtostderr -v 5                                                                                          │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:39 UTC │                     │
	│ stop    │ ha-326307 stop --alsologtostderr -v 5                                                                                               │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:39 UTC │ 19 Sep 25 22:40 UTC │
	│ start   │ ha-326307 start --wait true --alsologtostderr -v 5                                                                                  │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:40 UTC │                     │
	│ node    │ ha-326307 node list --alsologtostderr -v 5                                                                                          │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:46 UTC │                     │
	│ node    │ ha-326307 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:46 UTC │ 19 Sep 25 22:46 UTC │
	│ stop    │ ha-326307 stop --alsologtostderr -v 5                                                                                               │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:46 UTC │ 19 Sep 25 22:47 UTC │
	│ start   │ ha-326307 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd                                  │ ha-326307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
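The audit trail above records the sequence leading into this post-mortem: the cluster is stopped twice, restarted with --wait true (the second time explicitly with --driver=docker --container-runtime=containerd), and neither start shows an END TIME. The post-mortem data in this section was gathered with the status and logs invocations shown earlier; as a sketch, assuming the same profile name, they can be re-run by hand:

	out/minikube-linux-amd64 status --format={{.Host}} -p ha-326307 -n ha-326307
	out/minikube-linux-amd64 -p ha-326307 logs -n 25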
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:47:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:47:22.393632  117334 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:47:22.393921  117334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:47:22.393933  117334 out.go:374] Setting ErrFile to fd 2...
	I0919 22:47:22.393938  117334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:47:22.394221  117334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:47:22.394719  117334 out.go:368] Setting JSON to false
	I0919 22:47:22.395662  117334 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5386,"bootTime":1758316656,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:47:22.395761  117334 start.go:140] virtualization: kvm guest
	I0919 22:47:22.398317  117334 out.go:179] * [ha-326307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:47:22.399944  117334 notify.go:220] Checking for updates...
	I0919 22:47:22.399959  117334 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:47:22.401898  117334 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:47:22.403996  117334 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:47:22.405830  117334 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:47:22.407112  117334 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:47:22.408951  117334 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:47:22.410843  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:22.411324  117334 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:47:22.437572  117334 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:47:22.437728  117334 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:47:22.496139  117334 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:47:22.484010804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:47:22.496324  117334 docker.go:318] overlay module found
	I0919 22:47:22.498896  117334 out.go:179] * Using the docker driver based on existing profile
	I0919 22:47:22.500625  117334 start.go:304] selected driver: docker
	I0919 22:47:22.500654  117334 start.go:918] validating driver "docker" against &{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false
kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:47:22.500818  117334 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:47:22.500921  117334 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:47:22.561642  117334 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:47:22.55133994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:47:22.562343  117334 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:47:22.562375  117334 cni.go:84] Creating CNI manager for ""
	I0919 22:47:22.562446  117334 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 22:47:22.562497  117334 start.go:348] cluster config:
	{Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I0919 22:47:22.564745  117334 out.go:179] * Starting "ha-326307" primary control-plane node in "ha-326307" cluster
	I0919 22:47:22.566242  117334 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:47:22.567765  117334 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:47:22.569680  117334 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:47:22.569714  117334 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:47:22.569742  117334 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 22:47:22.569758  117334 cache.go:58] Caching tarball of preloaded images
	I0919 22:47:22.569871  117334 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:47:22.569882  117334 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:47:22.570000  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:22.592300  117334 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:47:22.592321  117334 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:47:22.592343  117334 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:47:22.592366  117334 start.go:360] acquireMachinesLock for ha-326307: {Name:mk42b79b90944aab63c8b37c2f94e04ca1ebec1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:47:22.592454  117334 start.go:364] duration metric: took 68.019µs to acquireMachinesLock for "ha-326307"
	I0919 22:47:22.592478  117334 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:47:22.592483  117334 fix.go:54] fixHost starting: 
	I0919 22:47:22.592723  117334 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:47:22.611665  117334 fix.go:112] recreateIfNeeded on ha-326307: state=Stopped err=<nil>
	W0919 22:47:22.611692  117334 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:47:22.613846  117334 out.go:252] * Restarting existing docker container for "ha-326307" ...
	I0919 22:47:22.613926  117334 cli_runner.go:164] Run: docker start ha-326307
	I0919 22:47:22.875579  117334 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:47:22.895349  117334 kic.go:430] container "ha-326307" state is running.
	I0919 22:47:22.895752  117334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:47:22.915818  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:22.916071  117334 machine.go:93] provisionDockerMachine start ...
	I0919 22:47:22.916129  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:22.937975  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:22.938274  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32844 <nil> <nil>}
	I0919 22:47:22.938293  117334 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:47:22.938928  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36440->127.0.0.1:32844: read: connection reset by peer
	I0919 22:47:26.080350  117334 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:47:26.080445  117334 ubuntu.go:182] provisioning hostname "ha-326307"
	I0919 22:47:26.080532  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:26.100397  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:26.100707  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32844 <nil> <nil>}
	I0919 22:47:26.100724  117334 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307 && echo "ha-326307" | sudo tee /etc/hostname
	I0919 22:47:26.252874  117334 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307
	
	I0919 22:47:26.252970  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:26.272439  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:26.272712  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32844 <nil> <nil>}
	I0919 22:47:26.272732  117334 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:47:26.413385  117334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:47:26.413420  117334 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:47:26.413463  117334 ubuntu.go:190] setting up certificates
	I0919 22:47:26.413474  117334 provision.go:84] configureAuth start
	I0919 22:47:26.413529  117334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:47:26.434202  117334 provision.go:143] copyHostCerts
	I0919 22:47:26.434243  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:47:26.434284  117334 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:47:26.434299  117334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:47:26.434378  117334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:47:26.434493  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:47:26.434519  117334 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:47:26.434534  117334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:47:26.434569  117334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:47:26.434680  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:47:26.434709  117334 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:47:26.434716  117334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:47:26.434744  117334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:47:26.434809  117334 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307 san=[127.0.0.1 192.168.49.2 ha-326307 localhost minikube]
	I0919 22:47:26.537252  117334 provision.go:177] copyRemoteCerts
	I0919 22:47:26.537320  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:47:26.537353  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:26.556867  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32844 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:47:26.655613  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:47:26.655675  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:47:26.683834  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:47:26.683899  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:47:26.710194  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:47:26.710259  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:47:26.736950  117334 provision.go:87] duration metric: took 323.462377ms to configureAuth
	I0919 22:47:26.736983  117334 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:47:26.737245  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:26.737259  117334 machine.go:96] duration metric: took 3.821173876s to provisionDockerMachine
	I0919 22:47:26.737266  117334 start.go:293] postStartSetup for "ha-326307" (driver="docker")
	I0919 22:47:26.737277  117334 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:47:26.737316  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:47:26.737349  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:26.756463  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32844 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:47:26.856735  117334 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:47:26.861231  117334 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:47:26.861308  117334 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:47:26.861330  117334 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:47:26.861339  117334 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:47:26.861357  117334 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:47:26.861422  117334 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:47:26.861501  117334 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:47:26.861507  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:47:26.861601  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:47:26.872018  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:47:26.900025  117334 start.go:296] duration metric: took 162.740511ms for postStartSetup
	I0919 22:47:26.900125  117334 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:47:26.900202  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:26.919327  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32844 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:47:27.014623  117334 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:47:27.019402  117334 fix.go:56] duration metric: took 4.426890073s for fixHost
	I0919 22:47:27.019432  117334 start.go:83] releasing machines lock for "ha-326307", held for 4.42696261s
	I0919 22:47:27.019501  117334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307
	I0919 22:47:27.039853  117334 ssh_runner.go:195] Run: cat /version.json
	I0919 22:47:27.039910  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:27.039957  117334 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:47:27.040034  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:27.063354  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32844 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:47:27.063685  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32844 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:47:27.159565  117334 ssh_runner.go:195] Run: systemctl --version
	I0919 22:47:27.242250  117334 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:47:27.248258  117334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:47:27.271610  117334 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:47:27.271703  117334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:47:27.285297  117334 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:47:27.285327  117334 start.go:495] detecting cgroup driver to use...
	I0919 22:47:27.285358  117334 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:47:27.285529  117334 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:47:27.302886  117334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:47:27.317247  117334 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:47:27.317299  117334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:47:27.332592  117334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:47:27.345247  117334 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:47:27.413652  117334 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:47:27.478751  117334 docker.go:234] disabling docker service ...
	I0919 22:47:27.478821  117334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:47:27.492359  117334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:47:27.505121  117334 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:47:27.571115  117334 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:47:27.637412  117334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:47:27.650858  117334 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:47:27.670199  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:47:27.681859  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:47:27.693661  117334 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:47:27.693746  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:47:27.705304  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:47:27.716745  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:47:27.728889  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:47:27.740462  117334 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:47:27.750962  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:47:27.762172  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:47:27.773910  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:47:27.785724  117334 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:47:27.795637  117334 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:47:27.805881  117334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:47:27.871569  117334 ssh_runner.go:195] Run: sudo systemctl restart containerd
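The run of sed edits above is minikube's containerd re-configuration for this driver: it pins the sandbox (pause) image to registry.k8s.io/pause:3.10.1, sets SystemdCgroup = true to match the systemd cgroup driver detected on the host, normalizes the runtime type to io.containerd.runc.v2, resets the CNI conf_dir to /etc/cni/net.d, and then restarts containerd. A hedged spot-check of the result on the node (assuming the profile is running again):

	# Confirm the cgroup driver and sandbox image that the edits above wrote into /etc/containerd/config.toml.
	out/minikube-linux-amd64 -p ha-326307 ssh -- sudo grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
	# crictl should report containerd 1.7.27, matching the version probed a few lines below.
	out/minikube-linux-amd64 -p ha-326307 ssh -- sudo crictl version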
	I0919 22:47:28.003543  117334 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:47:28.003622  117334 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:47:28.008094  117334 start.go:563] Will wait 60s for crictl version
	I0919 22:47:28.008181  117334 ssh_runner.go:195] Run: which crictl
	I0919 22:47:28.012000  117334 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:47:28.047417  117334 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:47:28.047490  117334 ssh_runner.go:195] Run: containerd --version
	I0919 22:47:28.075836  117334 ssh_runner.go:195] Run: containerd --version
	I0919 22:47:28.105440  117334 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:47:28.107140  117334 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:47:28.125639  117334 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:47:28.129843  117334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:47:28.142430  117334 kubeadm.go:875] updating cluster {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socke
tVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:47:28.142571  117334 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:47:28.142617  117334 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:47:28.178090  117334 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:47:28.178110  117334 containerd.go:534] Images already preloaded, skipping extraction
	I0919 22:47:28.178173  117334 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:47:28.213217  117334 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 22:47:28.213237  117334 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:47:28.213244  117334 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0919 22:47:28.213345  117334 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:47:28.213394  117334 ssh_runner.go:195] Run: sudo crictl info
	I0919 22:47:28.248095  117334 cni.go:84] Creating CNI manager for ""
	I0919 22:47:28.248113  117334 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 22:47:28.248121  117334 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:47:28.248141  117334 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326307 NodeName:ha-326307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:47:28.248259  117334 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-326307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
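The block above is the full kubeadm InitConfiguration / ClusterConfiguration / KubeletConfiguration / KubeProxyConfiguration that minikube renders for this node; a few lines below it is copied to /var/tmp/minikube/kubeadm.yaml.new alongside the kubelet unit drop-in. As a sketch (assuming the node is reachable over minikube ssh), the staged files can be inspected directly:

	# The rendered kubeadm config staged by this start attempt.
	out/minikube-linux-amd64 -p ha-326307 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	# The kubelet unit drop-in written in the same step.
	out/minikube-linux-amd64 -p ha-326307 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf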
	
	I0919 22:47:28.248277  117334 kube-vip.go:115] generating kube-vip config ...
	I0919 22:47:28.248312  117334 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:47:28.261292  117334 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:47:28.261387  117334 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
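kube-vip runs as a static pod: the manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below and advertises the HA virtual IP 192.168.49.254 on eth0, with leader election over the plndr-cp-lock lease. A hedged way to confirm the VIP actually lands on the restarted control plane (assuming it comes back up):

	# The static pod manifest placed by this start attempt.
	out/minikube-linux-amd64 -p ha-326307 ssh -- sudo cat /etc/kubernetes/manifests/kube-vip.yaml
	# The VIP 192.168.49.254 should appear as a secondary address on eth0 once kube-vip wins leader election.
	out/minikube-linux-amd64 -p ha-326307 ssh -- ip addr show eth0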
	I0919 22:47:28.261450  117334 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:47:28.270966  117334 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:47:28.271022  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:47:28.280464  117334 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 22:47:28.299487  117334 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:47:28.318443  117334 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0919 22:47:28.337895  117334 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:47:28.357421  117334 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:47:28.361103  117334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:47:28.373457  117334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:47:28.437086  117334 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:47:28.466071  117334 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.2
	I0919 22:47:28.466093  117334 certs.go:194] generating shared ca certs ...
	I0919 22:47:28.466112  117334 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:47:28.466289  117334 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:47:28.466330  117334 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:47:28.466341  117334 certs.go:256] generating profile certs ...
	I0919 22:47:28.466443  117334 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:47:28.466473  117334 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.fd4d23c7
	I0919 22:47:28.466487  117334 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.fd4d23c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:47:28.652830  117334 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.fd4d23c7 ...
	I0919 22:47:28.652864  117334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.fd4d23c7: {Name:mkb11519fb8d768bcdcc882a07bc4392b845764e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:47:28.653069  117334 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.fd4d23c7 ...
	I0919 22:47:28.653093  117334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.fd4d23c7: {Name:mkbf9a9f521b248ec2b23cdf5c175cff4ab045bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:47:28.653248  117334 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt.fd4d23c7 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt
	I0919 22:47:28.653439  117334 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.fd4d23c7 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key
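
The apiserver certificate is regenerated here because its SAN list must cover every endpoint clients may dial: the service IP 10.96.0.1, loopback, both control-plane node IPs and the kube-vip VIP 192.168.49.254. Below is a generic crypto/x509 sketch of issuing such a cert from an existing CA; the function name signServingCert and the ECDSA key choice are assumptions for illustration, not minikube's implementation:

package certsketch

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a serving certificate whose SANs are the given IPs,
// signed by the supplied CA. Illustrative sketch only.
func signServingCert(caCert *x509.Certificate, caKey *ecdsa.PrivateKey, ips []net.IP) (certDER []byte, key *ecdsa.PrivateKey, err error) {
	key, err = ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 192.168.49.2, 192.168.49.3, 192.168.49.254
	}
	certDER, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return certDER, key, err
}
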
	I0919 22:47:28.653628  117334 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:47:28.653647  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:47:28.653665  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:47:28.653684  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:47:28.653702  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:47:28.653718  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:47:28.653735  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:47:28.653757  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:47:28.653776  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:47:28.653837  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:47:28.653876  117334 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:47:28.653891  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:47:28.653922  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:47:28.653953  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:47:28.654046  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:47:28.654109  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:47:28.654183  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:47:28.654206  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:28.654224  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:47:28.654947  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:47:28.687144  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:47:28.717147  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:47:28.746276  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:47:28.773744  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 22:47:28.802690  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:47:28.830097  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:47:28.857601  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:47:28.884331  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:47:28.911171  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:47:28.939294  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:47:28.967356  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:47:28.988237  117334 ssh_runner.go:195] Run: openssl version
	I0919 22:47:28.994799  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:47:29.009817  117334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:47:29.016062  117334 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:47:29.016129  117334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:47:29.027090  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:47:29.041077  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:47:29.056239  117334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:29.062303  117334 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:29.062372  117334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:29.072410  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:47:29.086885  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:47:29.104789  117334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:47:29.110276  117334 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:47:29.110343  117334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:47:29.121706  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
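
Each of the three certificate installs above follows the same pattern: copy the PEM into /usr/share/ca-certificates, then link it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted CAs. A hedged Go sketch of that wiring, shelling out to openssl exactly as the log does; installCACert is a hypothetical name:

package catrust

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert symlinks a CA certificate into /etc/ssl/certs under its
// OpenSSL subject-hash name ("<hash>.0"). Sketch only; paths are illustrative.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace a stale link if present
	return os.Symlink(pemPath, link)
}
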
	I0919 22:47:29.134943  117334 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:47:29.142169  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:47:29.154102  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:47:29.164446  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:47:29.173875  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:47:29.185007  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:47:29.192558  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
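
Each "openssl x509 -checkend 86400" run above asks whether the certificate will still be valid 24 hours from now, so expiring control-plane certs can be caught before the node is restarted. The same test expressed with crypto/x509, as a minimal sketch (validFor is a hypothetical helper, not minikube's code):

package certcheck

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path remains valid for at
// least d from now, equivalent to openssl's -checkend with d in seconds.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.After(time.Now().Add(d)), nil
}
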
	I0919 22:47:29.205365  117334 kubeadm.go:392] StartCluster: {Name:ha-326307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:47:29.205502  117334 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 22:47:29.205578  117334 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:47:29.264798  117334 cri.go:89] found id: "d2382f2921e1e078e07b4c391b7a15c8be51e2d9202cab3eff086a9d2dc79f18"
	I0919 22:47:29.264825  117334 cri.go:89] found id: "3a0f88ac96e637fd1a25a3a9d355ee87788a1c8050a80984c4abad203a1e14cd"
	I0919 22:47:29.264831  117334 cri.go:89] found id: "1373762f48215f505ded621a9917f9e9deda40b2815f0cbda59d8457cd2e760e"
	I0919 22:47:29.264836  117334 cri.go:89] found id: "d64ccf0dc7b40cde117eacc30ca9bbbf7a6b3a0ee53c1d14cce1120da436c90c"
	I0919 22:47:29.264840  117334 cri.go:89] found id: "2368bad9e0ff41be487b38fe2174f4d5d39df2d577f9274cb01bd1725b563fbb"
	I0919 22:47:29.264844  117334 cri.go:89] found id: "b1e652a991900c99afd1da5b6ebb61c2bdc03afb0dc96c44daf18ff7674dd987"
	I0919 22:47:29.264849  117334 cri.go:89] found id: "bb7d0d80b9c2303a244cfffc060085bd15528b57546277cdaaaf2dd707b68f1f"
	I0919 22:47:29.264852  117334 cri.go:89] found id: "fea1c0534d95d8681a40f476ef920c8ced5eb8897a63d871e66830a2e35509fc"
	I0919 22:47:29.264856  117334 cri.go:89] found id: "fff949799c16ffb392a665b0e5af2f326948a468e2495b8ea2fa176e06b5cfbf"
	I0919 22:47:29.264865  117334 cri.go:89] found id: "9b01ee2966e081085b732d62e68985fd9249574188499e7e99fa53ff3e585c2d"
	I0919 22:47:29.264868  117334 cri.go:89] found id: "c1e4cc3b9a7f1259a1339b951fd30079b99dc7acedc895c7ae90814405daad16"
	I0919 22:47:29.264872  117334 cri.go:89] found id: "63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284"
	I0919 22:47:29.264876  117334 cri.go:89] found id: "7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c"
	I0919 22:47:29.264880  117334 cri.go:89] found id: "c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6"
	I0919 22:47:29.264883  117334 cri.go:89] found id: "e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5"
	I0919 22:47:29.264898  117334 cri.go:89] found id: ""
	I0919 22:47:29.264945  117334 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0919 22:47:29.298635  117334 cri.go:116] JSON = [{"ociVersion":"1.2.0","id":"1373762f48215f505ded621a9917f9e9deda40b2815f0cbda59d8457cd2e760e","pid":1226,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1373762f48215f505ded621a9917f9e9deda40b2815f0cbda59d8457cd2e760e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1373762f48215f505ded621a9917f9e9deda40b2815f0cbda59d8457cd2e760e/rootfs","created":"2025-09-19T22:47:29.253988391Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri.sandbox-id":"f294a22440ffca75fd2e9f6848a1160394ef6c37e08deda35e634212572736bb","io.kubernetes.cri.sandbox-name":"kube-scheduler-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"02be84f36b44ed11e0db130395870414"},"owner":"root"},{"ociVersion":"1.2.0","id":"1b70ed4ca2d4c10e6e16e99b0
60e92540e12c577ce382140a6b3a103bfd24379","pid":1046,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b70ed4ca2d4c10e6e16e99b060e92540e12c577ce382140a6b3a103bfd24379","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b70ed4ca2d4c10e6e16e99b060e92540e12c577ce382140a6b3a103bfd24379/rootfs","created":"2025-09-19T22:47:29.114522233Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"1b70ed4ca2d4c10e6e16e99b060e92540e12c577ce382140a6b3a103bfd24379","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-ha-326307_57c850ed4c5abebc96f109c9dc04f98c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"57c850ed4c5abebc96f109
c9dc04f98c"},"owner":"root"},{"ociVersion":"1.2.0","id":"2368bad9e0ff41be487b38fe2174f4d5d39df2d577f9274cb01bd1725b563fbb","pid":1176,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2368bad9e0ff41be487b38fe2174f4d5d39df2d577f9274cb01bd1725b563fbb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2368bad9e0ff41be487b38fe2174f4d5d39df2d577f9274cb01bd1725b563fbb/rootfs","created":"2025-09-19T22:47:29.22978844Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri.sandbox-id":"b8b1b6232caadfb96c901dc4b98663802bb63f8c256f83649d7e19a13bd21eda","io.kubernetes.cri.sandbox-name":"kube-apiserver-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f6c96a149704fe94a8f3f9671ba1a8ff"},"owner":"root"},{"ociVersion":"1.2.0","id":"3a0f88ac96e637fd1a25a3a9d355ee87788a1c8050a80984c4abad20
3a1e14cd","pid":1247,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a0f88ac96e637fd1a25a3a9d355ee87788a1c8050a80984c4abad203a1e14cd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a0f88ac96e637fd1a25a3a9d355ee87788a1c8050a80984c4abad203a1e14cd/rootfs","created":"2025-09-19T22:47:29.283061875Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri.sandbox-id":"1b70ed4ca2d4c10e6e16e99b060e92540e12c577ce382140a6b3a103bfd24379","io.kubernetes.cri.sandbox-name":"kube-controller-manager-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"57c850ed4c5abebc96f109c9dc04f98c"},"owner":"root"},{"ociVersion":"1.2.0","id":"8b072a1ef0aefde029cac171fe575d2b9617cf6a1ddd78d416c920d48e41c704","pid":1054,"status":"running","bundle":"/run/containerd/io.containerd.runti
me.v2.task/k8s.io/8b072a1ef0aefde029cac171fe575d2b9617cf6a1ddd78d416c920d48e41c704","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b072a1ef0aefde029cac171fe575d2b9617cf6a1ddd78d416c920d48e41c704/rootfs","created":"2025-09-19T22:47:29.115043342Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"8b072a1ef0aefde029cac171fe575d2b9617cf6a1ddd78d416c920d48e41c704","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-vip-ha-326307_11fc7e0ddcb5f54efe3aa73e9d205abc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-vip-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"11fc7e0ddcb5f54efe3aa73e9d205abc"},"owner":"root"},{"ociVersion":"1.2.0","id":"a02766bee21204caf69ae959a43e2c1ba579b3ddfcc1698e8bf28a0c454da099","pid":998,"status":"runni
ng","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a02766bee21204caf69ae959a43e2c1ba579b3ddfcc1698e8bf28a0c454da099","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a02766bee21204caf69ae959a43e2c1ba579b3ddfcc1698e8bf28a0c454da099/rootfs","created":"2025-09-19T22:47:29.079313993Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a02766bee21204caf69ae959a43e2c1ba579b3ddfcc1698e8bf28a0c454da099","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-ha-326307_044bbdcbe96821df073716c7f05fb17d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"044bbdcbe96821df073716c7f05fb17d"},"owner":"root"},{"ociVersion":"1.2.0","id":"b8b1b6232caadfb96c901dc4b98663802bb63f8c256f8364
9d7e19a13bd21eda","pid":1007,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8b1b6232caadfb96c901dc4b98663802bb63f8c256f83649d7e19a13bd21eda","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8b1b6232caadfb96c901dc4b98663802bb63f8c256f83649d7e19a13bd21eda/rootfs","created":"2025-09-19T22:47:29.081142893Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"b8b1b6232caadfb96c901dc4b98663802bb63f8c256f83649d7e19a13bd21eda","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-ha-326307_f6c96a149704fe94a8f3f9671ba1a8ff","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f6c96a149704fe94a8f3f9671ba1a8ff"},"owner":"root"},{"ociVersion
":"1.2.0","id":"d2382f2921e1e078e07b4c391b7a15c8be51e2d9202cab3eff086a9d2dc79f18","pid":1240,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2382f2921e1e078e07b4c391b7a15c8be51e2d9202cab3eff086a9d2dc79f18","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2382f2921e1e078e07b4c391b7a15c8be51e2d9202cab3eff086a9d2dc79f18/rootfs","created":"2025-09-19T22:47:29.265888027Z","annotations":{"io.kubernetes.cri.container-name":"kube-vip","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri.sandbox-id":"8b072a1ef0aefde029cac171fe575d2b9617cf6a1ddd78d416c920d48e41c704","io.kubernetes.cri.sandbox-name":"kube-vip-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"11fc7e0ddcb5f54efe3aa73e9d205abc"},"owner":"root"},{"ociVersion":"1.2.0","id":"d64ccf0dc7b40cde117eacc30ca9bbbf7a6b3a0ee53c1d14cce1120da436c90c","pid":1165,"status":"running","bundle":"/run/con
tainerd/io.containerd.runtime.v2.task/k8s.io/d64ccf0dc7b40cde117eacc30ca9bbbf7a6b3a0ee53c1d14cce1120da436c90c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d64ccf0dc7b40cde117eacc30ca9bbbf7a6b3a0ee53c1d14cce1120da436c90c/rootfs","created":"2025-09-19T22:47:29.231195148Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"a02766bee21204caf69ae959a43e2c1ba579b3ddfcc1698e8bf28a0c454da099","io.kubernetes.cri.sandbox-name":"etcd-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"044bbdcbe96821df073716c7f05fb17d"},"owner":"root"},{"ociVersion":"1.2.0","id":"f294a22440ffca75fd2e9f6848a1160394ef6c37e08deda35e634212572736bb","pid":1034,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f294a22440ffca75fd2e9f6848a1160394ef6c37e08deda35e634212572736bb","rootfs":"/run/containerd/io.co
ntainerd.runtime.v2.task/k8s.io/f294a22440ffca75fd2e9f6848a1160394ef6c37e08deda35e634212572736bb/rootfs","created":"2025-09-19T22:47:29.105362182Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f294a22440ffca75fd2e9f6848a1160394ef6c37e08deda35e634212572736bb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-ha-326307_02be84f36b44ed11e0db130395870414","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-ha-326307","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"02be84f36b44ed11e0db130395870414"},"owner":"root"}]
	I0919 22:47:29.298835  117334 cri.go:126] list returned 10 containers
	I0919 22:47:29.298850  117334 cri.go:129] container: {ID:1373762f48215f505ded621a9917f9e9deda40b2815f0cbda59d8457cd2e760e Status:created}
	I0919 22:47:29.298890  117334 cri.go:135] skipping {1373762f48215f505ded621a9917f9e9deda40b2815f0cbda59d8457cd2e760e created}: state = "created", want "paused"
	I0919 22:47:29.298909  117334 cri.go:129] container: {ID:1b70ed4ca2d4c10e6e16e99b060e92540e12c577ce382140a6b3a103bfd24379 Status:running}
	I0919 22:47:29.298925  117334 cri.go:131] skipping 1b70ed4ca2d4c10e6e16e99b060e92540e12c577ce382140a6b3a103bfd24379 - not in ps
	I0919 22:47:29.298937  117334 cri.go:129] container: {ID:2368bad9e0ff41be487b38fe2174f4d5d39df2d577f9274cb01bd1725b563fbb Status:running}
	I0919 22:47:29.298946  117334 cri.go:135] skipping {2368bad9e0ff41be487b38fe2174f4d5d39df2d577f9274cb01bd1725b563fbb running}: state = "running", want "paused"
	I0919 22:47:29.298952  117334 cri.go:129] container: {ID:3a0f88ac96e637fd1a25a3a9d355ee87788a1c8050a80984c4abad203a1e14cd Status:created}
	I0919 22:47:29.298962  117334 cri.go:135] skipping {3a0f88ac96e637fd1a25a3a9d355ee87788a1c8050a80984c4abad203a1e14cd created}: state = "created", want "paused"
	I0919 22:47:29.298968  117334 cri.go:129] container: {ID:8b072a1ef0aefde029cac171fe575d2b9617cf6a1ddd78d416c920d48e41c704 Status:running}
	I0919 22:47:29.298975  117334 cri.go:131] skipping 8b072a1ef0aefde029cac171fe575d2b9617cf6a1ddd78d416c920d48e41c704 - not in ps
	I0919 22:47:29.298980  117334 cri.go:129] container: {ID:a02766bee21204caf69ae959a43e2c1ba579b3ddfcc1698e8bf28a0c454da099 Status:running}
	I0919 22:47:29.298984  117334 cri.go:131] skipping a02766bee21204caf69ae959a43e2c1ba579b3ddfcc1698e8bf28a0c454da099 - not in ps
	I0919 22:47:29.298989  117334 cri.go:129] container: {ID:b8b1b6232caadfb96c901dc4b98663802bb63f8c256f83649d7e19a13bd21eda Status:running}
	I0919 22:47:29.298995  117334 cri.go:131] skipping b8b1b6232caadfb96c901dc4b98663802bb63f8c256f83649d7e19a13bd21eda - not in ps
	I0919 22:47:29.299001  117334 cri.go:129] container: {ID:d2382f2921e1e078e07b4c391b7a15c8be51e2d9202cab3eff086a9d2dc79f18 Status:running}
	I0919 22:47:29.299015  117334 cri.go:135] skipping {d2382f2921e1e078e07b4c391b7a15c8be51e2d9202cab3eff086a9d2dc79f18 running}: state = "running", want "paused"
	I0919 22:47:29.299028  117334 cri.go:129] container: {ID:d64ccf0dc7b40cde117eacc30ca9bbbf7a6b3a0ee53c1d14cce1120da436c90c Status:running}
	I0919 22:47:29.299033  117334 cri.go:135] skipping {d64ccf0dc7b40cde117eacc30ca9bbbf7a6b3a0ee53c1d14cce1120da436c90c running}: state = "running", want "paused"
	I0919 22:47:29.299047  117334 cri.go:129] container: {ID:f294a22440ffca75fd2e9f6848a1160394ef6c37e08deda35e634212572736bb Status:running}
	I0919 22:47:29.299054  117334 cri.go:131] skipping f294a22440ffca75fd2e9f6848a1160394ef6c37e08deda35e634212572736bb - not in ps
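
The filter above cross-references the runc list JSON with the crictl ps output: IDs that crictl did not report are pause/sandbox containers and are skipped ("not in ps"), and the remainder are skipped unless they are already in the wanted "paused" state, so in this run nothing needs to be unpaused. A small sketch of that selection logic; the container struct and the selectByState name are assumptions for illustration:

package crifilter

// container mirrors the {ID Status} pairs printed in the log above.
type container struct {
	ID     string
	Status string
}

// selectByState keeps containers that crictl reported (sandboxes are "not in
// ps" and get skipped) and that are already in the wanted state, e.g. "paused".
func selectByState(all []container, inPs map[string]bool, want string) []string {
	var ids []string
	for _, c := range all {
		if !inPs[c.ID] {
			continue // pause/sandbox container, unknown to crictl ps
		}
		if c.Status != want {
			continue // e.g. "running" or "created" when "paused" is wanted
		}
		ids = append(ids, c.ID)
	}
	return ids
}
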
	I0919 22:47:29.299103  117334 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:47:29.313552  117334 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:47:29.313573  117334 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:47:29.313711  117334 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:47:29.328032  117334 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:47:29.328544  117334 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-326307" does not appear in /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:47:29.328687  117334 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14678/kubeconfig needs updating (will repair): [kubeconfig missing "ha-326307" cluster setting kubeconfig missing "ha-326307" context setting]
	I0919 22:47:29.329054  117334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:47:29.330465  117334 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:47:29.331017  117334 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:47:29.331119  117334 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:47:29.331255  117334 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:47:29.331294  117334 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:47:29.331318  117334 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:47:29.331333  117334 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:47:29.331799  117334 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:47:29.346171  117334 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:47:29.346200  117334 kubeadm.go:593] duration metric: took 32.620051ms to restartPrimaryControlPlane
	I0919 22:47:29.346212  117334 kubeadm.go:394] duration metric: took 140.858312ms to StartCluster
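
The restart path is cheap here because the "sudo diff -u" between the live /var/tmp/minikube/kubeadm.yaml and the freshly staged kubeadm.yaml.new found no changes, so restartPrimaryControlPlane can conclude that no reconfiguration is required. A minimal equivalent comparison, assuming plain byte equality is a sufficient stand-in for diff (needsReconfig is a hypothetical name):

package kubeadmcheck

import (
	"bytes"
	"os"
)

// needsReconfig reports whether the staged kubeadm config differs from the
// one already on the node. Sketch only; the log above shells out to diff -u.
func needsReconfig(currentPath, stagedPath string) (bool, error) {
	cur, err := os.ReadFile(currentPath)
	if err != nil {
		return true, err
	}
	staged, err := os.ReadFile(stagedPath)
	if err != nil {
		return true, err
	}
	return !bytes.Equal(cur, staged), nil
}
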
	I0919 22:47:29.346233  117334 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:47:29.346317  117334 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:47:29.346994  117334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:47:29.347231  117334 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:47:29.347255  117334 start.go:241] waiting for startup goroutines ...
	I0919 22:47:29.347272  117334 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:47:29.347482  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:29.351116  117334 out.go:179] * Enabled addons: 
	I0919 22:47:29.353346  117334 addons.go:514] duration metric: took 6.07167ms for enable addons: enabled=[]
	I0919 22:47:29.353405  117334 start.go:246] waiting for cluster config update ...
	I0919 22:47:29.353417  117334 start.go:255] writing updated cluster config ...
	I0919 22:47:29.355660  117334 out.go:203] 
	I0919 22:47:29.359640  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:29.359776  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:29.367412  117334 out.go:179] * Starting "ha-326307-m02" control-plane node in "ha-326307" cluster
	I0919 22:47:29.369430  117334 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:47:29.370763  117334 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:47:29.371887  117334 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:47:29.371912  117334 cache.go:58] Caching tarball of preloaded images
	I0919 22:47:29.371963  117334 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:47:29.372017  117334 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:47:29.372033  117334 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:47:29.372127  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:29.400033  117334 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:47:29.400059  117334 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:47:29.400074  117334 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:47:29.400097  117334 start.go:360] acquireMachinesLock for ha-326307-m02: {Name:mk4919a9b19250804b0f53d01bcd11efaf9a431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:47:29.400165  117334 start.go:364] duration metric: took 44.585µs to acquireMachinesLock for "ha-326307-m02"
	I0919 22:47:29.400188  117334 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:47:29.400196  117334 fix.go:54] fixHost starting: m02
	I0919 22:47:29.400407  117334 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:47:29.422810  117334 fix.go:112] recreateIfNeeded on ha-326307-m02: state=Stopped err=<nil>
	W0919 22:47:29.422843  117334 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:47:29.424776  117334 out.go:252] * Restarting existing docker container for "ha-326307-m02" ...
	I0919 22:47:29.424858  117334 cli_runner.go:164] Run: docker start ha-326307-m02
	I0919 22:47:29.713119  117334 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:47:29.736526  117334 kic.go:430] container "ha-326307-m02" state is running.
	I0919 22:47:29.737086  117334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:47:29.760996  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:29.761508  117334 machine.go:93] provisionDockerMachine start ...
	I0919 22:47:29.761592  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:29.786125  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:29.786524  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32849 <nil> <nil>}
	I0919 22:47:29.786544  117334 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:47:29.787500  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33254->127.0.0.1:32849: read: connection reset by peer
	I0919 22:47:32.925586  117334 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:47:32.925612  117334 ubuntu.go:182] provisioning hostname "ha-326307-m02"
	I0919 22:47:32.925675  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:32.944726  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:32.944992  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32849 <nil> <nil>}
	I0919 22:47:32.945010  117334 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m02 && echo "ha-326307-m02" | sudo tee /etc/hostname
	I0919 22:47:33.096423  117334 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326307-m02
	
	I0919 22:47:33.096496  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:33.114711  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:33.114944  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32849 <nil> <nil>}
	I0919 22:47:33.114969  117334 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326307-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326307-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326307-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:47:33.253218  117334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:47:33.253247  117334 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 22:47:33.253270  117334 ubuntu.go:190] setting up certificates
	I0919 22:47:33.253285  117334 provision.go:84] configureAuth start
	I0919 22:47:33.253337  117334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:47:33.278232  117334 provision.go:143] copyHostCerts
	I0919 22:47:33.278270  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:47:33.278301  117334 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 22:47:33.278314  117334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 22:47:33.278394  117334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 22:47:33.278487  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:47:33.278510  117334 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 22:47:33.278520  117334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 22:47:33.278558  117334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 22:47:33.278615  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:47:33.278637  117334 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 22:47:33.278645  117334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 22:47:33.278683  117334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 22:47:33.278747  117334 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.ha-326307-m02 san=[127.0.0.1 192.168.49.3 ha-326307-m02 localhost minikube]
	I0919 22:47:33.332031  117334 provision.go:177] copyRemoteCerts
	I0919 22:47:33.332083  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:47:33.332131  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:33.354921  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:47:33.462587  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:47:33.462724  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:47:33.493993  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:47:33.494059  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:47:33.531622  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:47:33.531687  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:47:33.568741  117334 provision.go:87] duration metric: took 315.438937ms to configureAuth
	I0919 22:47:33.568793  117334 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:47:33.569097  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:33.569112  117334 machine.go:96] duration metric: took 3.807571867s to provisionDockerMachine
	I0919 22:47:33.569121  117334 start.go:293] postStartSetup for "ha-326307-m02" (driver="docker")
	I0919 22:47:33.569133  117334 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:47:33.569229  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:47:33.569284  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:33.595481  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:47:33.707066  117334 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:47:33.712405  117334 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:47:33.712461  117334 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:47:33.712475  117334 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:47:33.712488  117334 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:47:33.712501  117334 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 22:47:33.712564  117334 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 22:47:33.712671  117334 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 22:47:33.712686  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /etc/ssl/certs/182102.pem
	I0919 22:47:33.712807  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:47:33.725851  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:47:33.760705  117334 start.go:296] duration metric: took 191.567136ms for postStartSetup
	I0919 22:47:33.760799  117334 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:47:33.760847  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:33.786454  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:47:33.886966  117334 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:47:33.892336  117334 fix.go:56] duration metric: took 4.492132883s for fixHost
	I0919 22:47:33.892363  117334 start.go:83] releasing machines lock for "ha-326307-m02", held for 4.492187006s
	I0919 22:47:33.892445  117334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m02
	I0919 22:47:33.918146  117334 out.go:179] * Found network options:
	I0919 22:47:33.919682  117334 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:47:33.920993  117334 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:47:33.921046  117334 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:47:33.921133  117334 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:47:33.921217  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:33.921236  117334 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:47:33.921296  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m02
	I0919 22:47:33.942856  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:47:33.944992  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307-m02/id_rsa Username:docker}
	I0919 22:47:34.044494  117334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:47:34.146655  117334 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:47:34.146752  117334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:47:34.157411  117334 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:47:34.157460  117334 start.go:495] detecting cgroup driver to use...
	I0919 22:47:34.157498  117334 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:47:34.157577  117334 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 22:47:34.173690  117334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:47:34.187634  117334 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:47:34.187699  117334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:47:34.216342  117334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:47:34.232606  117334 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:47:34.446768  117334 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:47:34.602403  117334 docker.go:234] disabling docker service ...
	I0919 22:47:34.602480  117334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:47:34.623629  117334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:47:34.643560  117334 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:47:34.782267  117334 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:47:34.931791  117334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:47:34.957097  117334 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:47:34.994360  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:47:35.017010  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:47:35.036523  117334 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:47:35.036620  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:47:35.066491  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:47:35.083515  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:47:35.103774  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:47:35.125189  117334 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:47:35.139400  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:47:35.159122  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:47:35.174784  117334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
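
The sed edits above rewrite containerd's config.toml in place: pin the sandbox (pause) image to registry.k8s.io/pause:3.10.1, force SystemdCgroup = true so containerd matches the host's systemd cgroup driver, migrate the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports. A Go sketch of just the SystemdCgroup edit, using a multiline regexp in place of sed (enableSystemdCgroup is a hypothetical name):

package containerdcfg

import (
	"os"
	"regexp"
)

// enableSystemdCgroup flips SystemdCgroup to true in containerd's config.toml,
// mirroring the sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
// invocation in the log above. Sketch only.
func enableSystemdCgroup(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	return os.WriteFile(path, out, 0o644)
}
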
	I0919 22:47:35.189042  117334 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:47:35.203756  117334 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:47:35.218480  117334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:47:35.373466  117334 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:47:35.787529  117334 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 22:47:35.787609  117334 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 22:47:35.792630  117334 start.go:563] Will wait 60s for crictl version
	I0919 22:47:35.792696  117334 ssh_runner.go:195] Run: which crictl
	I0919 22:47:35.797656  117334 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:47:35.849084  117334 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 22:47:35.849174  117334 ssh_runner.go:195] Run: containerd --version
	I0919 22:47:35.880560  117334 ssh_runner.go:195] Run: containerd --version
	I0919 22:47:35.911964  117334 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 22:47:35.913181  117334 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:47:35.914141  117334 cli_runner.go:164] Run: docker network inspect ha-326307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:47:35.936843  117334 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:47:35.942788  117334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
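Editor's note: the /etc/hosts command above is an idempotent upsert — drop any stale host.minikube.internal line, append the current mapping, and copy the file back with sudo. A minimal local sketch of the same pattern (operating on a scratch copy, not the real /etc/hosts; not minikube's code):

// Sketch only: remove any existing entry for name, then append ip<TAB>name.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // matches the grep -v $'\t<name>$' filter in the logged command
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/tmp/hosts-copy", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}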
	I0919 22:47:35.959831  117334 mustload.go:65] Loading cluster: ha-326307
	I0919 22:47:35.960361  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:35.960725  117334 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:47:35.983548  117334 host.go:66] Checking if "ha-326307" exists ...
	I0919 22:47:35.983903  117334 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307 for IP: 192.168.49.3
	I0919 22:47:35.983923  117334 certs.go:194] generating shared ca certs ...
	I0919 22:47:35.983943  117334 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:47:35.984087  117334 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 22:47:35.984197  117334 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 22:47:35.984214  117334 certs.go:256] generating profile certs ...
	I0919 22:47:35.984321  117334 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key
	I0919 22:47:35.984407  117334 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key.3b537cd4
	I0919 22:47:35.984452  117334 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key
	I0919 22:47:35.984465  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:47:35.984481  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:47:35.984502  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:47:35.984517  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:47:35.984529  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:47:35.984558  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:47:35.984580  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:47:35.984596  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:47:35.984682  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 22:47:35.984741  117334 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 22:47:35.984763  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 22:47:35.984810  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:47:35.984855  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:47:35.984890  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 22:47:35.984952  117334 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 22:47:35.984998  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem -> /usr/share/ca-certificates/18210.pem
	I0919 22:47:35.985019  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> /usr/share/ca-certificates/182102.pem
	I0919 22:47:35.985040  117334 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:35.985116  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307
	I0919 22:47:36.013559  117334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32844 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/ha-326307/id_rsa Username:docker}
	I0919 22:47:36.108444  117334 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:47:36.114647  117334 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:47:36.134856  117334 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:47:36.139915  117334 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:47:36.157816  117334 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:47:36.162850  117334 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:47:36.182313  117334 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:47:36.186562  117334 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0919 22:47:36.205411  117334 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:47:36.210518  117334 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:47:36.228454  117334 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:47:36.233958  117334 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:47:36.250475  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:47:36.287936  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:47:36.318473  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:47:36.347137  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:47:36.377777  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 22:47:36.409829  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:47:36.443400  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:47:36.478291  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:47:36.515327  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 22:47:36.554063  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 22:47:36.590241  117334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:47:36.621753  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:47:36.642425  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:47:36.664996  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:47:36.686785  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0919 22:47:36.707098  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:47:36.728558  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:47:36.749312  117334 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:47:36.770481  117334 ssh_runner.go:195] Run: openssl version
	I0919 22:47:36.777442  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 22:47:36.789581  117334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 22:47:36.793657  117334 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 22:47:36.793719  117334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 22:47:36.801340  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:47:36.812179  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:47:36.824436  117334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:36.828381  117334 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:36.828455  117334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:47:36.835691  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:47:36.845784  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 22:47:36.856763  117334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 22:47:36.860912  117334 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 22:47:36.860989  117334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 22:47:36.869849  117334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 22:47:36.880884  117334 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:47:36.885178  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:47:36.892956  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:47:36.900721  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:47:36.908539  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:47:36.916037  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:47:36.924529  117334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
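Editor's note: the "-checkend 86400" probes above ask whether each certificate expires within the next 24 hours. A minimal Go sketch of the same check (the path is taken from the log; this is not minikube's implementation):

// Sketch only: Go equivalent of "openssl x509 -noout -in <cert> -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// true if the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}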
	I0919 22:47:36.933540  117334 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 containerd true true} ...
	I0919 22:47:36.933663  117334 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326307-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-326307 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:47:36.933702  117334 kube-vip.go:115] generating kube-vip config ...
	I0919 22:47:36.933754  117334 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:47:36.947749  117334 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:47:36.947810  117334 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:47:36.947865  117334 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:47:36.958049  117334 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:47:36.958122  117334 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:47:36.969748  117334 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 22:47:36.994919  117334 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:47:37.018759  117334 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:47:37.040341  117334 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:47:37.044968  117334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:47:37.058562  117334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:47:37.191250  117334 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:47:37.204699  117334 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 22:47:37.204946  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:37.207697  117334 out.go:179] * Verifying Kubernetes components...
	I0919 22:47:37.208832  117334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:47:37.335597  117334 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:47:37.349256  117334 kapi.go:59] client config for ha-326307: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:47:37.349320  117334 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:47:37.349539  117334 node_ready.go:35] waiting up to 6m0s for node "ha-326307-m02" to be "Ready" ...
	I0919 22:47:37.358002  117334 node_ready.go:49] node "ha-326307-m02" is "Ready"
	I0919 22:47:37.358037  117334 node_ready.go:38] duration metric: took 8.469761ms for node "ha-326307-m02" to be "Ready" ...
	I0919 22:47:37.358053  117334 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:47:37.358113  117334 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:47:37.371449  117334 api_server.go:72] duration metric: took 166.706719ms to wait for apiserver process to appear ...
	I0919 22:47:37.371495  117334 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:47:37.371518  117334 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:47:37.381373  117334 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:47:37.382308  117334 api_server.go:141] control plane version: v1.34.0
	I0919 22:47:37.382336  117334 api_server.go:131] duration metric: took 10.833174ms to wait for apiserver health ...
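Editor's note: the healthz probe above hits https://192.168.49.2:8443/healthz and expects a 200 "ok". A minimal standalone sketch of that probe (skipping TLS verification is a shortcut for brevity; the real check trusts the minikube CA at ~/.minikube/ca.crt):

// Sketch only: probe the apiserver /healthz endpoint as the log does above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Shortcut for the sketch; load the cluster CA for a faithful check.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
}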
	I0919 22:47:37.382347  117334 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:47:37.388868  117334 system_pods.go:59] 24 kube-system pods found
	I0919 22:47:37.388907  117334 system_pods.go:61] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:47:37.388914  117334 system_pods.go:61] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:47:37.388923  117334 system_pods.go:61] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:47:37.388931  117334 system_pods.go:61] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:47:37.388934  117334 system_pods.go:61] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running
	I0919 22:47:37.388938  117334 system_pods.go:61] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:47:37.388941  117334 system_pods.go:61] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:47:37.388944  117334 system_pods.go:61] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:47:37.388948  117334 system_pods.go:61] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:47:37.388955  117334 system_pods.go:61] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:47:37.388962  117334 system_pods.go:61] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running
	I0919 22:47:37.388968  117334 system_pods.go:61] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:47:37.388978  117334 system_pods.go:61] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:47:37.388981  117334 system_pods.go:61] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running
	I0919 22:47:37.388984  117334 system_pods.go:61] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:47:37.388987  117334 system_pods.go:61] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:47:37.388991  117334 system_pods.go:61] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:47:37.388994  117334 system_pods.go:61] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:47:37.388998  117334 system_pods.go:61] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:47:37.389001  117334 system_pods.go:61] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running
	I0919 22:47:37.389004  117334 system_pods.go:61] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:47:37.389006  117334 system_pods.go:61] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:47:37.389008  117334 system_pods.go:61] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:47:37.389011  117334 system_pods.go:61] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:47:37.389016  117334 system_pods.go:74] duration metric: took 6.663946ms to wait for pod list to return data ...
	I0919 22:47:37.389022  117334 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:47:37.392401  117334 default_sa.go:45] found service account: "default"
	I0919 22:47:37.392424  117334 default_sa.go:55] duration metric: took 3.397243ms for default service account to be created ...
	I0919 22:47:37.392433  117334 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:47:37.399599  117334 system_pods.go:86] 24 kube-system pods found
	I0919 22:47:37.399633  117334 system_pods.go:89] "coredns-66bc5c9577-9j5pw" [7d073e38-b63e-494d-bda0-3dde372a950b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:47:37.399642  117334 system_pods.go:89] "coredns-66bc5c9577-wqvzd" [64376c4d-1b82-490d-887d-7f628b134014] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:47:37.399653  117334 system_pods.go:89] "etcd-ha-326307" [cc755641-9756-42fe-94ea-76d3167a2f67] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:47:37.399658  117334 system_pods.go:89] "etcd-ha-326307-m02" [fe655813-ee01-420d-a127-9e43d85b3674] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:47:37.399662  117334 system_pods.go:89] "etcd-ha-326307-m03" [2264c92a-675d-4d92-b0c7-640bfa6eab93] Running
	I0919 22:47:37.399666  117334 system_pods.go:89] "kindnet-dmxl8" [ba4fd407-2e93-4324-ab2d-4f192d79fdf5] Running
	I0919 22:47:37.399669  117334 system_pods.go:89] "kindnet-gxnzs" [4fa827fc-0ba7-49b7-a225-e36d76241d92] Running
	I0919 22:47:37.399672  117334 system_pods.go:89] "kindnet-mk6pv" [71a20992-8279-4040-9edc-bedef6e7b570] Running
	I0919 22:47:37.399677  117334 system_pods.go:89] "kube-apiserver-ha-326307" [48020293-8f00-4ab7-8361-d21025061653] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:47:37.399683  117334 system_pods.go:89] "kube-apiserver-ha-326307-m02" [568fe413-bf13-4b89-867f-a74dacede73f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:47:37.399687  117334 system_pods.go:89] "kube-apiserver-ha-326307-m03" [a068235a-a6f2-4e72-a4ab-b61d248187d3] Running
	I0919 22:47:37.399694  117334 system_pods.go:89] "kube-controller-manager-ha-326307" [a62d94c7-7f48-4b34-9985-58de1d7d32bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:47:37.399699  117334 system_pods.go:89] "kube-controller-manager-ha-326307-m02" [0930e36a-1e9b-4f15-ac20-4fb1696fa911] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:47:37.399705  117334 system_pods.go:89] "kube-controller-manager-ha-326307-m03" [b1dba457-e157-4c9e-ba28-c2c383eb13d8] Running
	I0919 22:47:37.399711  117334 system_pods.go:89] "kube-proxy-8kxtv" [70be5fcc-7ab6-4eb1-870d-988fee1a01bb] Running
	I0919 22:47:37.399716  117334 system_pods.go:89] "kube-proxy-q8mtj" [6e3896c8-f771-462e-888d-942ebc96a7c2] Running
	I0919 22:47:37.399721  117334 system_pods.go:89] "kube-proxy-ws89d" [db26755e-db93-40a7-9f1a-f52205a1df48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:47:37.399725  117334 system_pods.go:89] "kube-scheduler-ha-326307" [da6af764-e4e6-48aa-9569-577e4379692f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:47:37.399731  117334 system_pods.go:89] "kube-scheduler-ha-326307-m02" [f6878d24-de85-4cf9-a49f-7ff55bf06519] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:47:37.399735  117334 system_pods.go:89] "kube-scheduler-ha-326307-m03" [2d92661f-37cb-443e-b082-3960536ed3a8] Running
	I0919 22:47:37.399738  117334 system_pods.go:89] "kube-vip-ha-326307" [4096d466-04a3-43fa-9471-3e52b65426bb] Running
	I0919 22:47:37.399742  117334 system_pods.go:89] "kube-vip-ha-326307-m02" [24b5d637-78d1-41f7-8e00-40fee7f9e60f] Running
	I0919 22:47:37.399746  117334 system_pods.go:89] "kube-vip-ha-326307-m03" [c9b028c5-322e-49e8-8195-c7a478179f74] Running
	I0919 22:47:37.399749  117334 system_pods.go:89] "storage-provisioner" [cafe04c6-2dce-4b93-b6d1-205efc39b360] Running
	I0919 22:47:37.399759  117334 system_pods.go:126] duration metric: took 7.320503ms to wait for k8s-apps to be running ...
	I0919 22:47:37.399765  117334 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:47:37.399808  117334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:47:37.412914  117334 system_svc.go:56] duration metric: took 13.132784ms WaitForService to wait for kubelet
	I0919 22:47:37.412941  117334 kubeadm.go:578] duration metric: took 208.206141ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:47:37.412955  117334 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:47:37.416336  117334 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:47:37.416362  117334 node_conditions.go:123] node cpu capacity is 8
	I0919 22:47:37.416374  117334 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:47:37.416378  117334 node_conditions.go:123] node cpu capacity is 8
	I0919 22:47:37.416382  117334 node_conditions.go:105] duration metric: took 3.422712ms to run NodePressure ...
	I0919 22:47:37.416393  117334 start.go:241] waiting for startup goroutines ...
	I0919 22:47:37.416414  117334 start.go:255] writing updated cluster config ...
	I0919 22:47:37.418704  117334 out.go:203] 
	I0919 22:47:37.420426  117334 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:37.420560  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:37.422628  117334 out.go:179] * Starting "ha-326307-m04" worker node in "ha-326307" cluster
	I0919 22:47:37.424537  117334 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:47:37.426046  117334 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:47:37.427281  117334 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:47:37.427308  117334 cache.go:58] Caching tarball of preloaded images
	I0919 22:47:37.427343  117334 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:47:37.427431  117334 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:47:37.427448  117334 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 22:47:37.427555  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:37.449457  117334 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:47:37.449492  117334 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:47:37.449508  117334 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:47:37.449543  117334 start.go:360] acquireMachinesLock for ha-326307-m04: {Name:mk65e4f546dae46b7c7a6cfe6f590e09a0a01676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:47:37.449601  117334 start.go:364] duration metric: took 44.457µs to acquireMachinesLock for "ha-326307-m04"
	I0919 22:47:37.449624  117334 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:47:37.449630  117334 fix.go:54] fixHost starting: m04
	I0919 22:47:37.449822  117334 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:47:37.469296  117334 fix.go:112] recreateIfNeeded on ha-326307-m04: state=Stopped err=<nil>
	W0919 22:47:37.469328  117334 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:47:37.472893  117334 out.go:252] * Restarting existing docker container for "ha-326307-m04" ...
	I0919 22:47:37.473037  117334 cli_runner.go:164] Run: docker start ha-326307-m04
	I0919 22:47:37.730215  117334 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:47:37.769395  117334 kic.go:430] container "ha-326307-m04" state is running.
	I0919 22:47:37.769860  117334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-326307-m04
	I0919 22:47:37.802691  117334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/ha-326307/config.json ...
	I0919 22:47:37.803232  117334 machine.go:93] provisionDockerMachine start ...
	I0919 22:47:37.803452  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	I0919 22:47:37.830966  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:37.831267  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32854 <nil> <nil>}
	I0919 22:47:37.831280  117334 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:47:37.832368  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40452->127.0.0.1:32854: read: connection reset by peer
	I0919 22:47:40.870002  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:43.907381  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:46.957770  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:49.994121  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:53.032142  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:56.070745  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:59.108416  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:02.147842  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:05.186639  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:08.223489  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:11.260279  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:14.297886  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:17.336139  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:20.372068  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:23.408593  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:26.447629  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:29.485125  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:32.522879  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:35.561474  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:38.597754  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:41.635956  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:44.673554  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:47.712342  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:50.749576  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:53.787102  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:56.825425  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:59.862260  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:02.899291  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:05.938332  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:08.975744  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:12.015641  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:15.054493  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:18.091218  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:21.132315  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:24.170051  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:27.208961  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:30.248209  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:33.285497  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:36.323122  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:39.360791  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:42.398655  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:45.436612  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:48.473310  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:51.510574  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:54.549231  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:57.586924  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:00.625036  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:03.663968  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:06.702355  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:09.739425  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:12.775624  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:15.814726  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:18.852079  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:21.891087  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:24.931250  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:27.968596  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:31.006284  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:34.044202  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:37.083109  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:40.084266  117334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:50:40.084324  117334 ubuntu.go:182] provisioning hostname "ha-326307-m04"
	I0919 22:50:40.084394  117334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-326307-m04
	I0919 22:50:40.104999  117334 main.go:141] libmachine: Using SSH client type: native
	I0919 22:50:40.105261  117334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32854 <nil> <nil>}
	I0919 22:50:40.105277  117334 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326307-m04 && echo "ha-326307-m04" | sudo tee /etc/hostname
	I0919 22:50:40.142494  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:43.179723  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:46.219078  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:49.257468  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:52.294725  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:55.333453  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:58.369775  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:01.410204  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:04.447539  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:07.484969  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:10.522879  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:13.560883  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:16.598958  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:19.636636  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:22.675329  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:25.715661  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:28.752029  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:31.789804  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:34.827282  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:37.864801  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:40.902097  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:43.938239  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:46.977234  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:50.013273  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:53.050179  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:56.088669  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:59.125751  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:02.164113  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:05.202127  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:08.238804  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:11.276877  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:14.314627  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:17.352864  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:20.390447  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:23.426725  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:26.464453  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:29.501151  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:32.537422  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:35.576049  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:38.612605  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:41.651300  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:44.689779  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:47.727687  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:50.765824  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:53.803525  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:56.842262  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:59.879359  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:02.917098  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:05.954879  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:08.991550  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:12.029919  117334 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	60851880beba1       6e38f40d628db       4 minutes ago       Running             storage-provisioner       4                   19b5df6929bcc       storage-provisioner
	810a6cd144238       409467f978b4a       5 minutes ago       Running             kindnet-cni               2                   10d4d4d21603a       kindnet-gxnzs
	ce5836794f70b       52546a367cc9e       5 minutes ago       Running             coredns                   2                   abb93bb86926d       coredns-66bc5c9577-9j5pw
	c0c71c61914ea       8c811b4aec35f       5 minutes ago       Running             busybox                   2                   9c53bd5c66cb6       busybox-7b57f96db7-m8swj
	a2201f970a903       52546a367cc9e       5 minutes ago       Running             coredns                   2                   b1f6c2cce9b7b       coredns-66bc5c9577-wqvzd
	dea19b7270918       6e38f40d628db       5 minutes ago       Exited              storage-provisioner       3                   19b5df6929bcc       storage-provisioner
	635f0da2d945c       df0860106674d       5 minutes ago       Running             kube-proxy                2                   990778e2be7c0       kube-proxy-8kxtv
	d2382f2921e1e       765655ea60781       5 minutes ago       Running             kube-vip                  2                   8b072a1ef0aef       kube-vip-ha-326307
	3a0f88ac96e63       a0af72f2ec6d6       5 minutes ago       Running             kube-controller-manager   2                   1b70ed4ca2d4c       kube-controller-manager-ha-326307
	1373762f48215       46169d968e920       5 minutes ago       Running             kube-scheduler            2                   f294a22440ffc       kube-scheduler-ha-326307
	d64ccf0dc7b40       5f1f5298c888d       5 minutes ago       Running             etcd                      2                   a02766bee2120       etcd-ha-326307
	2368bad9e0ff4       90550c43ad2bc       5 minutes ago       Running             kube-apiserver            2                   b8b1b6232caad       kube-apiserver-ha-326307
	b1e652a991900       765655ea60781       6 minutes ago       Exited              kube-vip                  1                   8124d18d08f1c       kube-vip-ha-326307
	fea1c0534d95d       409467f978b4a       12 minutes ago      Exited              kindnet-cni               1                   c6c63e662186b       kindnet-gxnzs
	fff949799c16f       52546a367cc9e       12 minutes ago      Exited              coredns                   1                   d66fcc49f8eef       coredns-66bc5c9577-wqvzd
	9b01ee2966e08       52546a367cc9e       12 minutes ago      Exited              coredns                   1                   8915a954c3a5e       coredns-66bc5c9577-9j5pw
	471e8ec48d678       8c811b4aec35f       12 minutes ago      Exited              busybox                   1                   4242a65c0c92e       busybox-7b57f96db7-m8swj
	c1e4cc3b9a7f1       df0860106674d       12 minutes ago      Exited              kube-proxy                1                   bb87d6f8210e1       kube-proxy-8kxtv
	63dc43f0224fa       46169d968e920       13 minutes ago      Exited              kube-scheduler            1                   b84e223a297e4       kube-scheduler-ha-326307
	7a855457ed99a       a0af72f2ec6d6       13 minutes ago      Exited              kube-controller-manager   1                   35b9028490f76       kube-controller-manager-ha-326307
	c543ffd76b85c       5f1f5298c888d       13 minutes ago      Exited              etcd                      1                   a85600718119d       etcd-ha-326307
	e1a181d28b52f       90550c43ad2bc       13 minutes ago      Exited              kube-apiserver            1                   4ff7be1cea576       kube-apiserver-ha-326307
	
	
	==> containerd <==
	Sep 19 22:47:35 ha-326307 containerd[478]: time="2025-09-19T22:47:35.234308150Z" level=info msg="StartContainer for \"c0c71c61914ea5670cf441fb3a7e58b4705b12ff8e0a2a06ac54ed3cff104fec\" returns successfully"
	Sep 19 22:47:35 ha-326307 containerd[478]: time="2025-09-19T22:47:35.238794118Z" level=info msg="StartContainer for \"ce5836794f70bc6820af35a854d22c718d7297427a16153456428d555d028498\" returns successfully"
	Sep 19 22:47:35 ha-326307 containerd[478]: time="2025-09-19T22:47:35.308722928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-gxnzs,Uid:4fa827fc-0ba7-49b7-a225-e36d76241d92,Namespace:kube-system,Attempt:2,} returns sandbox id \"10d4d4d21603ab15b79bc6fbc40f90fcf3501c0a2aace8931ec66a8a2c657d31\""
	Sep 19 22:47:35 ha-326307 containerd[478]: time="2025-09-19T22:47:35.314143699Z" level=info msg="CreateContainer within sandbox \"10d4d4d21603ab15b79bc6fbc40f90fcf3501c0a2aace8931ec66a8a2c657d31\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Sep 19 22:47:35 ha-326307 containerd[478]: time="2025-09-19T22:47:35.328893607Z" level=info msg="CreateContainer within sandbox \"10d4d4d21603ab15b79bc6fbc40f90fcf3501c0a2aace8931ec66a8a2c657d31\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"810a6cd144238ba44beb37fd79b425463e131547cc59d8e50af5f62d574f2a0c\""
	Sep 19 22:47:35 ha-326307 containerd[478]: time="2025-09-19T22:47:35.329509765Z" level=info msg="StartContainer for \"810a6cd144238ba44beb37fd79b425463e131547cc59d8e50af5f62d574f2a0c\""
	Sep 19 22:47:35 ha-326307 containerd[478]: time="2025-09-19T22:47:35.508339663Z" level=info msg="StartContainer for \"810a6cd144238ba44beb37fd79b425463e131547cc59d8e50af5f62d574f2a0c\" returns successfully"
	Sep 19 22:48:05 ha-326307 containerd[478]: time="2025-09-19T22:48:05.175910087Z" level=info msg="received exit event container_id:\"dea19b72709181d4a859a25f85247f5072fef30f2d6610cfa153436eb7b2884b\"  id:\"dea19b72709181d4a859a25f85247f5072fef30f2d6610cfa153436eb7b2884b\"  pid:1801  exit_status:1  exited_at:{seconds:1758322085  nanos:175369625}"
	Sep 19 22:48:05 ha-326307 containerd[478]: time="2025-09-19T22:48:05.201458709Z" level=info msg="shim disconnected" id=dea19b72709181d4a859a25f85247f5072fef30f2d6610cfa153436eb7b2884b namespace=k8s.io
	Sep 19 22:48:05 ha-326307 containerd[478]: time="2025-09-19T22:48:05.201493469Z" level=warning msg="cleaning up after shim disconnected" id=dea19b72709181d4a859a25f85247f5072fef30f2d6610cfa153436eb7b2884b namespace=k8s.io
	Sep 19 22:48:05 ha-326307 containerd[478]: time="2025-09-19T22:48:05.201500921Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 19 22:48:05 ha-326307 containerd[478]: time="2025-09-19T22:48:05.763126880Z" level=info msg="RemoveContainer for \"bb7d0d80b9c2303a244cfffc060085bd15528b57546277cdaaaf2dd707b68f1f\""
	Sep 19 22:48:05 ha-326307 containerd[478]: time="2025-09-19T22:48:05.769587459Z" level=info msg="RemoveContainer for \"bb7d0d80b9c2303a244cfffc060085bd15528b57546277cdaaaf2dd707b68f1f\" returns successfully"
	Sep 19 22:48:19 ha-326307 containerd[478]: time="2025-09-19T22:48:19.556042637Z" level=info msg="CreateContainer within sandbox \"19b5df6929bcca0690cc4136c5e26131ce64312a882606ffd8cc1d071d82ffb9\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:4,}"
	Sep 19 22:48:19 ha-326307 containerd[478]: time="2025-09-19T22:48:19.567603647Z" level=info msg="CreateContainer within sandbox \"19b5df6929bcca0690cc4136c5e26131ce64312a882606ffd8cc1d071d82ffb9\" for &ContainerMetadata{Name:storage-provisioner,Attempt:4,} returns container id \"60851880beba128de4ce9b593cf8209f2829b865f85927324eb4d9fd759bbfe9\""
	Sep 19 22:48:19 ha-326307 containerd[478]: time="2025-09-19T22:48:19.568202287Z" level=info msg="StartContainer for \"60851880beba128de4ce9b593cf8209f2829b865f85927324eb4d9fd759bbfe9\""
	Sep 19 22:48:19 ha-326307 containerd[478]: time="2025-09-19T22:48:19.625774944Z" level=info msg="StartContainer for \"60851880beba128de4ce9b593cf8209f2829b865f85927324eb4d9fd759bbfe9\" returns successfully"
	Sep 19 22:48:28 ha-326307 containerd[478]: time="2025-09-19T22:48:28.698901083Z" level=info msg="StopPodSandbox for \"a66e01a46573183df8e2c6c041cfc03b30ce85291f726f529b293f52bca48ac9\""
	Sep 19 22:48:28 ha-326307 containerd[478]: time="2025-09-19T22:48:28.699035772Z" level=info msg="TearDown network for sandbox \"a66e01a46573183df8e2c6c041cfc03b30ce85291f726f529b293f52bca48ac9\" successfully"
	Sep 19 22:48:28 ha-326307 containerd[478]: time="2025-09-19T22:48:28.699055579Z" level=info msg="StopPodSandbox for \"a66e01a46573183df8e2c6c041cfc03b30ce85291f726f529b293f52bca48ac9\" returns successfully"
	Sep 19 22:48:28 ha-326307 containerd[478]: time="2025-09-19T22:48:28.699501185Z" level=info msg="RemovePodSandbox for \"a66e01a46573183df8e2c6c041cfc03b30ce85291f726f529b293f52bca48ac9\""
	Sep 19 22:48:28 ha-326307 containerd[478]: time="2025-09-19T22:48:28.699545157Z" level=info msg="Forcibly stopping sandbox \"a66e01a46573183df8e2c6c041cfc03b30ce85291f726f529b293f52bca48ac9\""
	Sep 19 22:48:28 ha-326307 containerd[478]: time="2025-09-19T22:48:28.699622054Z" level=info msg="TearDown network for sandbox \"a66e01a46573183df8e2c6c041cfc03b30ce85291f726f529b293f52bca48ac9\" successfully"
	Sep 19 22:48:28 ha-326307 containerd[478]: time="2025-09-19T22:48:28.704572896Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a66e01a46573183df8e2c6c041cfc03b30ce85291f726f529b293f52bca48ac9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 19 22:48:28 ha-326307 containerd[478]: time="2025-09-19T22:48:28.704698929Z" level=info msg="RemovePodSandbox \"a66e01a46573183df8e2c6c041cfc03b30ce85291f726f529b293f52bca48ac9\" returns successfully"
	
	
	==> coredns [9b01ee2966e081085b732d62e68985fd9249574188499e7e99fa53ff3e585c2d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35530 - 6163 "HINFO IN 6373030861249236477.4474115650148028833. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02205233s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a2201f970a903dd1a29d0391142e3d00ef21da0884a2a28400e81321190e6e16] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41660 - 49533 "HINFO IN 7022188611791911840.6626161006354943945. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015497281s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [ce5836794f70bc6820af35a854d22c718d7297427a16153456428d555d028498] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56320 - 12627 "HINFO IN 3016744115900145335.4346117531830661805. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022814313s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [fff949799c16ffb392a665b0e5af2f326948a468e2495b8ea2fa176e06b5cfbf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60701 - 36326 "HINFO IN 1706815658337671432.2830354807318160675. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06080012s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-326307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_23_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:23:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:53:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:47:34 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:47:34 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:47:34 +0000   Fri, 19 Sep 2025 22:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:47:34 +0000   Fri, 19 Sep 2025 22:23:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-326307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 4287e5008d4f4feeae7fa1f3a559c994
	  System UUID:                9c3f30ed-68b2-4a1c-af95-9031ae210a78
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-m8swj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-66bc5c9577-9j5pw             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29m
	  kube-system                 coredns-66bc5c9577-wqvzd             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29m
	  kube-system                 etcd-ha-326307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-gxnzs                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-ha-326307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-326307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-8kxtv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-326307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-326307                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m39s                  kube-proxy       
	  Normal  Starting                 29m                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)      kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)      kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)      kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     29m                    kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m                    kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                    kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           29m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           29m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           28m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           12m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  Starting                 5m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m46s (x8 over 5m46s)  kubelet          Node ha-326307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m46s (x8 over 5m46s)  kubelet          Node ha-326307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m46s (x7 over 5m46s)  kubelet          Node ha-326307 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-326307 event: Registered Node ha-326307 in Controller
	
	
	Name:               ha-326307-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326307-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-326307
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326307-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:53:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:47:34 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:47:34 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:47:34 +0000   Fri, 19 Sep 2025 22:24:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:47:34 +0000   Fri, 19 Sep 2025 22:24:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-326307-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 4960329a90f04c119b15c55495326caa
	  System UUID:                8095cd89-f43b-4d8a-adef-b40d6aaa7ad2
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tfpvf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 etcd-ha-326307-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-mk6pv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-ha-326307-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-326307-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-q8mtj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-326307-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-326307-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 29m                    kube-proxy       
	  Normal  RegisteredNode           29m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           29m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           28m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-326307-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-326307-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-326307-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           14m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-326307-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-326307-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-326307-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           12m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  Starting                 5m44s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m43s (x8 over 5m44s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m43s (x8 over 5m44s)  kubelet          Node ha-326307-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m43s (x7 over 5m44s)  kubelet          Node ha-326307-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-326307-m02 event: Registered Node ha-326307-m02 in Controller
	
	
	==> dmesg <==
	[Sep19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001853] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001006] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.090013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.467231] i8042: Warning: Keylock active
	[  +0.009747] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004210] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001075] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000901] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000908] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000709] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001042] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001465] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.534799] block sda: the capability attribute has been deprecated.
	[  +0.099617] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027269] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.089616] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [c543ffd76b85cd616fdc10e6d4948f0d679d10d322656654711f3e654ec0cea6] <==
	{"level":"warn","ts":"2025-09-19T22:47:09.472144Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:47:07.469928Z","time spent":"2.001985778s","remote":"127.0.0.1:48290","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2025/09/19 22:47:09 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-09-19T22:47:09.544554Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082853268560,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-09-19T22:47:09.816762Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:47:02.812430Z","time spent":"7.004302729s","remote":"127.0.0.1:48482","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	{"level":"warn","ts":"2025-09-19T22:47:10.002650Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"8.003556753s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-09-19T22:47:10.002720Z","caller":"traceutil/trace.go:172","msg":"trace[181523167] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"8.003648375s","start":"2025-09-19T22:47:01.999057Z","end":"2025-09-19T22:47:10.002706Z","steps":["trace[181523167] 'agreement among raft nodes before linearized reading'  (duration: 8.003554081s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:47:10.002779Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:47:01.999040Z","time spent":"8.003724494s","remote":"127.0.0.1:48826","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	{"level":"warn","ts":"2025-09-19T22:47:10.048458Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040082853268560,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-09-19T22:47:10.453021Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-19T22:47:10.453108Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"ha-326307","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-19T22:47:10.459205Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-09-19T22:47:10.463581Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"4.313480359s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" limit:1 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-09-19T22:47:10.463670Z","caller":"traceutil/trace.go:172","msg":"trace[1835245076] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; }","duration":"4.31358728s","start":"2025-09-19T22:47:06.150069Z","end":"2025-09-19T22:47:10.463657Z","steps":["trace[1835245076] 'agreement among raft nodes before linearized reading'  (duration: 4.31347851s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:47:10.463735Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:47:06.150055Z","time spent":"4.313652679s","remote":"127.0.0.1:49184","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":0,"response size":0,"request content":"key:\"/registry/leases/kube-system/plndr-cp-lock\" limit:1 "}
	2025/09/19 22:47:10 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-09-19T22:47:10.481119Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"10.079532981s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-09-19T22:47:10.481205Z","caller":"traceutil/trace.go:172","msg":"trace[1123974654] range","detail":"{range_begin:; range_end:; }","duration":"10.079644256s","start":"2025-09-19T22:47:00.401545Z","end":"2025-09-19T22:47:10.481189Z","steps":["trace[1123974654] 'agreement among raft nodes before linearized reading'  (duration: 10.079528901s)"],"step_count":1}
	{"level":"error","ts":"2025-09-19T22:47:10.481320Z","caller":"etcdhttp/health.go:345","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: context canceled\n[+]non_learner ok\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHTTPEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:345\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2220\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2747\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3210\nnet/http.(*conn).serve\n\tnet/http/server.go:2092"}
	{"level":"warn","ts":"2025-09-19T22:47:10.482383Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"4.517939408s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-vip-ha-326307\" limit:1 ","response":"","error":"context canceled"}
	{"level":"warn","ts":"2025-09-19T22:47:10.482581Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:47:07.698286Z","time spent":"2.784289152s","remote":"127.0.0.1:49184","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2025/09/19 22:47:10 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-09-19T22:47:10.482772Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:47:09.845794Z","time spent":"636.97514ms","remote":"127.0.0.1:48482","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2025/09/19 22:47:10 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2025-09-19T22:47:10.482967Z","caller":"traceutil/trace.go:172","msg":"trace[443762342] range","detail":"{range_begin:/registry/pods/kube-system/kube-vip-ha-326307; range_end:; }","duration":"4.518536655s","start":"2025-09-19T22:47:05.964415Z","end":"2025-09-19T22:47:10.482952Z","steps":["trace[443762342] 'agreement among raft nodes before linearized reading'  (duration: 4.517938129s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:47:10.483203Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:47:05.964387Z","time spent":"4.5187934s","remote":"127.0.0.1:48902","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/pods/kube-system/kube-vip-ha-326307\" limit:1 "}
	
	
	==> etcd [d64ccf0dc7b40cde117eacc30ca9bbbf7a6b3a0ee53c1d14cce1120da436c90c] <==
	{"level":"warn","ts":"2025-09-19T22:47:33.609596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.617771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.628550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.637404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.645727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.653769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.661698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.679028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.686024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.694479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.702321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.711077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.719653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.728486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.738961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.754845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.766884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.772330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.781340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.789872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.803430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.807473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.815662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.825214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:47:33.884731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35708","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:53:14 up  1:35,  0 users,  load average: 0.22, 0.75, 0.93
	Linux ha-326307 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [810a6cd144238ba44beb37fd79b425463e131547cc59d8e50af5f62d574f2a0c] <==
	I0919 22:52:06.092207       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:52:16.091963       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:52:16.092011       1 main.go:301] handling current node
	I0919 22:52:16.092033       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:52:16.092042       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:52:26.091594       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:52:26.091628       1 main.go:301] handling current node
	I0919 22:52:26.091648       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:52:26.091654       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:52:36.092096       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:52:36.092127       1 main.go:301] handling current node
	I0919 22:52:36.092142       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:52:36.092146       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:52:46.091841       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:52:46.091880       1 main.go:301] handling current node
	I0919 22:52:46.091896       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:52:46.091900       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:52:56.091491       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:52:56.091522       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:52:56.091754       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:52:56.091769       1 main.go:301] handling current node
	I0919 22:53:06.091508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:53:06.091541       1 main.go:301] handling current node
	I0919 22:53:06.091556       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:53:06.091561       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [fea1c0534d95d8681a40f476ef920c8ced5eb8897a63d871e66830a2e35509fc] <==
	I0919 22:46:21.328073       1 main.go:301] handling current node
	I0919 22:46:21.328087       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:21.328093       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:46:21.328336       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:46:21.328349       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:31.327485       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:31.327520       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:46:31.327776       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:46:31.327794       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:31.327897       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:46:31.327908       1 main.go:301] handling current node
	I0919 22:46:41.328117       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:46:41.328176       1 main.go:324] Node ha-326307-m03 has CIDR [10.244.2.0/24] 
	I0919 22:46:41.328398       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:46:41.328415       1 main.go:301] handling current node
	I0919 22:46:41.328447       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:41.328457       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:46:51.327464       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:46:51.327528       1 main.go:301] handling current node
	I0919 22:46:51.327543       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:46:51.327548       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	I0919 22:47:01.328382       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:47:01.328423       1 main.go:301] handling current node
	I0919 22:47:01.328462       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:47:01.328471       1 main.go:324] Node ha-326307-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [2368bad9e0ff41be487b38fe2174f4d5d39df2d577f9274cb01bd1725b563fbb] <==
	I0919 22:47:34.484736       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	W0919 22:47:34.488289       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3 192.168.49.4]
	I0919 22:47:34.511334       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:47:34.520260       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0919 22:47:34.529745       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0919 22:47:34.529772       1 policy_source.go:240] refreshing policies
	I0919 22:47:34.534324       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 22:47:34.590694       1 controller.go:667] quota admission added evaluator for: endpoints
	I0919 22:47:34.609636       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0919 22:47:34.613587       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0919 22:47:34.726626       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0919 22:47:35.371498       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 22:47:35.823852       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I0919 22:47:37.832844       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:47:38.032959       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 22:47:38.230260       1 controller.go:667] quota admission added evaluator for: deployments.apps
	W0919 22:47:55.824559       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0919 22:48:48.139443       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:49:01.095898       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:50:12.534127       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:50:28.439785       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:51:18.351731       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:51:51.119439       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:52:31.280720       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:53:12.594911       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-apiserver [e1a181d28b52f38f5b949594a58e659b210dce7e9337c1df57be92df2a87ece5] <==
	E0919 22:47:10.465626       1 watcher.go:335] watch chan error: etcdserver: no leader
	W0919 22:47:10.465627       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0919 22:47:10.465747       1 watcher.go:335] watch chan error: etcdserver: no leader
	W0919 22:47:10.466105       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0919 22:47:10.466258       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:47:10.466288       1 watcher.go:335] watch chan error: etcdserver: no leader
	W0919 22:47:10.466370       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0919 22:47:10.466724       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:47:10.466745       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:47:10.466753       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:47:10.466769       1 watcher.go:335] watch chan error: etcdserver: no leader
	W0919 22:47:10.466813       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:47:10.467215       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:47:10.467350       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0919 22:47:10.468428       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:47:10.468667       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:47:10.468725       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:47:10.468748       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:47:10.468772       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:47:10.468845       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:47:10.468868       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:47:10.469009       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:47:10.469035       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:47:10.469099       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:47:10.469129       1 watcher.go:335] watch chan error: etcdserver: no leader
	
	
	==> kube-controller-manager [3a0f88ac96e637fd1a25a3a9d355ee87788a1c8050a80984c4abad203a1e14cd] <==
	I0919 22:47:38.281145       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e75e0609-6186-44fd-8674-15383f700490", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-49s8d": the object has been modified; please apply your changes to the latest version and try again
	E0919 22:47:57.774532       1 gc_controller.go:151] "Failed to get node" err="node \"ha-326307-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326307-m03"
	E0919 22:47:57.774575       1 gc_controller.go:151] "Failed to get node" err="node \"ha-326307-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326307-m03"
	E0919 22:47:57.774585       1 gc_controller.go:151] "Failed to get node" err="node \"ha-326307-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326307-m03"
	E0919 22:47:57.774593       1 gc_controller.go:151] "Failed to get node" err="node \"ha-326307-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326307-m03"
	E0919 22:47:57.774598       1 gc_controller.go:151] "Failed to get node" err="node \"ha-326307-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326307-m03"
	E0919 22:48:17.775401       1 gc_controller.go:151] "Failed to get node" err="node \"ha-326307-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326307-m03"
	E0919 22:48:17.775452       1 gc_controller.go:151] "Failed to get node" err="node \"ha-326307-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326307-m03"
	E0919 22:48:17.775457       1 gc_controller.go:151] "Failed to get node" err="node \"ha-326307-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326307-m03"
	E0919 22:48:17.775465       1 gc_controller.go:151] "Failed to get node" err="node \"ha-326307-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326307-m03"
	E0919 22:48:17.775470       1 gc_controller.go:151] "Failed to get node" err="node \"ha-326307-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326307-m03"
	I0919 22:48:17.786946       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-ws89d"
	I0919 22:48:17.812750       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-ws89d"
	I0919 22:48:17.812793       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-326307-m03"
	I0919 22:48:17.840694       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-326307-m03"
	I0919 22:48:17.840738       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-326307-m03"
	I0919 22:48:17.864847       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-326307-m03"
	I0919 22:48:17.864881       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-dmxl8"
	E0919 22:48:17.882014       1 gc_controller.go:256] "Unhandled Error" err="pods \"kindnet-dmxl8\" not found" logger="UnhandledError"
	I0919 22:48:17.882055       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-326307-m03"
	I0919 22:48:17.905297       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-326307-m03"
	I0919 22:48:17.905333       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-326307-m03"
	E0919 22:48:17.911799       1 gc_controller.go:256] "Unhandled Error" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1dba457-e157-4c9e-ba28-c2c383eb13d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"},{\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-19T22:48:17Z\\\",\\\"message\\\":\\\"PodGC: node no longer exists\\\",\\\"observedGeneration\\\":2,\\\"reason\\\":\\\"DeletionByPodGC\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"observedGeneration\\\":2,\\\"phase\\\":\\\"Failed\\\"}}\" for pod \"kube-system\"/\"kube-controller-manager-ha-326307-m03\": pods \"kube-controller-manager-ha-326307-m03\" not found" logger="UnhandledError"
	I0919 22:48:17.911851       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-326307-m03"
	E0919 22:48:17.914943       1 gc_controller.go:256] "Unhandled Error" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d92661f-37cb-443e-b082-3960536ed3a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"},{\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-19T22:48:17Z\\\",\\\"message\\\":\\\"PodGC: node no longer exists\\\",\\\"observedGeneration\\\":2,\\\"reason\\\":\\\"DeletionByPodGC\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"observedGeneration\\\":2,\\\"phase\\\":\\\"Failed\\\"}}\" for pod \"kube-system\"/\"kube-scheduler-ha-326307-m03\": pods \"kube-scheduler-ha-326307-m03\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [7a855457ed99afe9cf4fc231f2be64af24c29ff79d1749522a97044d84f87b8c] <==
	I0919 22:40:22.627256       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:40:22.631207       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:40:22.638798       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:40:22.639864       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0919 22:40:22.639886       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:40:22.639904       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0919 22:40:22.640312       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 22:40:22.640328       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:40:22.640420       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 22:40:22.640606       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307"
	I0919 22:40:22.640638       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m03"
	I0919 22:40:22.640606       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326307-m02"
	I0919 22:40:22.640694       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 22:40:22.946089       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-49s8d\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:40:22.946224       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e75e0609-6186-44fd-8674-15383f700490", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-49s8d": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:40:56.500901       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-49s8d\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:40:56.501810       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e75e0609-6186-44fd-8674-15383f700490", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-49s8d": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:40:57.687491       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-49s8d\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:40:57.688223       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e75e0609-6186-44fd-8674-15383f700490", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-49s8d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-49s8d": the object has been modified; please apply your changes to the latest version and try again
	E0919 22:46:46.068479       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0919 22:47:02.599981       1 gc_controller.go:151] "Failed to get node" err="node \"ha-326307-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326307-m03"
	E0919 22:47:02.600036       1 gc_controller.go:151] "Failed to get node" err="node \"ha-326307-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326307-m03"
	E0919 22:47:02.600045       1 gc_controller.go:151] "Failed to get node" err="node \"ha-326307-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326307-m03"
	E0919 22:47:02.600052       1 gc_controller.go:151] "Failed to get node" err="node \"ha-326307-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326307-m03"
	E0919 22:47:02.600058       1 gc_controller.go:151] "Failed to get node" err="node \"ha-326307-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326307-m03"
	
	
	==> kube-proxy [635f0da2d945c63c6e3f2e95a0b39ccffbf865028bb53cdc1fd330abfe695fe0] <==
	I0919 22:47:35.204263       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:47:35.274627       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:47:35.376237       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:47:35.376284       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:47:35.376409       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:47:35.416946       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:47:35.417017       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:47:35.424139       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:47:35.424681       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:47:35.424853       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:47:35.427136       1 config.go:200] "Starting service config controller"
	I0919 22:47:35.427149       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:47:35.427172       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:47:35.427179       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:47:35.427148       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:47:35.427200       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:47:35.427266       1 config.go:309] "Starting node config controller"
	I0919 22:47:35.427272       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:47:35.427278       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:47:35.527336       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:47:35.527393       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:47:35.527430       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c1e4cc3b9a7f1259a1339b951fd30079b99dc7acedc895c7ae90814405daad16] <==
	I0919 22:40:20.575328       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:40:20.672061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:40:20.772951       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:40:20.773530       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:40:20.774779       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:40:20.837591       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:40:20.837664       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:40:20.853483       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:40:20.853910       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:40:20.853934       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:40:20.859319       1 config.go:309] "Starting node config controller"
	I0919 22:40:20.859436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:40:20.859447       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:40:20.859941       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:40:20.859974       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:40:20.860439       1 config.go:200] "Starting service config controller"
	I0919 22:40:20.860604       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:40:20.861833       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:40:20.862286       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:40:20.960109       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:40:20.960793       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:40:20.962617       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1373762f48215f505ded621a9917f9e9deda40b2815f0cbda59d8457cd2e760e] <==
	I0919 22:47:29.797748       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:47:34.420484       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 22:47:34.420523       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 22:47:34.420534       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:47:34.420543       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:47:34.459980       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:47:34.460279       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:47:34.463981       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:47:34.464136       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:47:34.470267       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:47:34.464665       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:47:34.571336       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [63dc43f0224fa1a9d7b840c89125ea37b4a73ef9ee8a12fcb8e3d4abfeac6284] <==
	I0919 22:40:14.121705       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:40:19.175600       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 22:40:19.175869       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 22:40:19.175952       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:40:19.175968       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:40:19.217556       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:40:19.217674       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:40:19.220816       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:40:19.221038       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:40:19.226224       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:40:19.226332       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:40:19.321477       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:47:10.454250       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0919 22:47:10.454604       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0919 22:47:10.454625       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0919 22:47:10.455029       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:47:10.455814       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0919 22:47:10.468265       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 19 22:47:34 ha-326307 kubelet[619]: E0919 22:47:34.566916     619 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-326307\" already exists" pod="kube-system/kube-controller-manager-ha-326307"
	Sep 19 22:47:34 ha-326307 kubelet[619]: I0919 22:47:34.566953     619 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-326307"
	Sep 19 22:47:34 ha-326307 kubelet[619]: E0919 22:47:34.577275     619 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-326307\" already exists" pod="kube-system/kube-scheduler-ha-326307"
	Sep 19 22:47:34 ha-326307 kubelet[619]: I0919 22:47:34.577316     619 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-326307"
	Sep 19 22:47:34 ha-326307 kubelet[619]: E0919 22:47:34.584394     619 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-vip-ha-326307\" already exists" pod="kube-system/kube-vip-ha-326307"
	Sep 19 22:47:34 ha-326307 kubelet[619]: I0919 22:47:34.584433     619 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-326307"
	Sep 19 22:47:34 ha-326307 kubelet[619]: E0919 22:47:34.601070     619 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-326307\" already exists" pod="kube-system/etcd-ha-326307"
	Sep 19 22:47:34 ha-326307 kubelet[619]: E0919 22:47:34.605184     619 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-326307\" already exists" pod="kube-system/kube-controller-manager-ha-326307"
	Sep 19 22:47:34 ha-326307 kubelet[619]: I0919 22:47:34.629181     619 kubelet_node_status.go:124] "Node was previously registered" node="ha-326307"
	Sep 19 22:47:34 ha-326307 kubelet[619]: I0919 22:47:34.629461     619 kubelet_node_status.go:78] "Successfully registered node" node="ha-326307"
	Sep 19 22:47:34 ha-326307 kubelet[619]: I0919 22:47:34.629518     619 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:47:34 ha-326307 kubelet[619]: I0919 22:47:34.630916     619 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:47:34 ha-326307 kubelet[619]: I0919 22:47:34.633177     619 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 19 22:47:34 ha-326307 kubelet[619]: I0919 22:47:34.722672     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-xtables-lock\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:47:34 ha-326307 kubelet[619]: I0919 22:47:34.722714     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cafe04c6-2dce-4b93-b6d1-205efc39b360-tmp\") pod \"storage-provisioner\" (UID: \"cafe04c6-2dce-4b93-b6d1-205efc39b360\") " pod="kube-system/storage-provisioner"
	Sep 19 22:47:34 ha-326307 kubelet[619]: I0919 22:47:34.723101     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-cni-cfg\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:47:34 ha-326307 kubelet[619]: I0919 22:47:34.723236     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70be5fcc-7ab6-4eb1-870d-988fee1a01bb-xtables-lock\") pod \"kube-proxy-8kxtv\" (UID: \"70be5fcc-7ab6-4eb1-870d-988fee1a01bb\") " pod="kube-system/kube-proxy-8kxtv"
	Sep 19 22:47:34 ha-326307 kubelet[619]: I0919 22:47:34.723297     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fa827fc-0ba7-49b7-a225-e36d76241d92-lib-modules\") pod \"kindnet-gxnzs\" (UID: \"4fa827fc-0ba7-49b7-a225-e36d76241d92\") " pod="kube-system/kindnet-gxnzs"
	Sep 19 22:47:34 ha-326307 kubelet[619]: I0919 22:47:34.723337     619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70be5fcc-7ab6-4eb1-870d-988fee1a01bb-lib-modules\") pod \"kube-proxy-8kxtv\" (UID: \"70be5fcc-7ab6-4eb1-870d-988fee1a01bb\") " pod="kube-system/kube-proxy-8kxtv"
	Sep 19 22:47:39 ha-326307 kubelet[619]: I0919 22:47:39.475831     619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 19 22:47:41 ha-326307 kubelet[619]: I0919 22:47:41.136886     619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 19 22:48:05 ha-326307 kubelet[619]: I0919 22:48:05.761197     619 scope.go:117] "RemoveContainer" containerID="bb7d0d80b9c2303a244cfffc060085bd15528b57546277cdaaaf2dd707b68f1f"
	Sep 19 22:48:05 ha-326307 kubelet[619]: I0919 22:48:05.761766     619 scope.go:117] "RemoveContainer" containerID="dea19b72709181d4a859a25f85247f5072fef30f2d6610cfa153436eb7b2884b"
	Sep 19 22:48:05 ha-326307 kubelet[619]: E0919 22:48:05.762061     619 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cafe04c6-2dce-4b93-b6d1-205efc39b360)\"" pod="kube-system/storage-provisioner" podUID="cafe04c6-2dce-4b93-b6d1-205efc39b360"
	Sep 19 22:48:19 ha-326307 kubelet[619]: I0919 22:48:19.553305     619 scope.go:117] "RemoveContainer" containerID="dea19b72709181d4a859a25f85247f5072fef30f2d6610cfa153436eb7b2884b"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-326307 -n ha-326307
helpers_test.go:269: (dbg) Run:  kubectl --context ha-326307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-n7chr
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartCluster]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-326307 describe pod busybox-7b57f96db7-n7chr
helpers_test.go:290: (dbg) kubectl --context ha-326307 describe pod busybox-7b57f96db7-n7chr:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-n7chr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fzr8g (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fzr8g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  6m30s                  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  6m30s                  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  6m29s                  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  6m29s                  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  5m41s                  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  41s                    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  6m30s (x2 over 6m31s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  42s (x2 over 5m42s)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (353.68s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (9.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-364197 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-364197 -n no-preload-364197
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-364197 -n no-preload-364197: exit status 2 (326.710153ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-364197 -n no-preload-364197
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-364197 -n no-preload-364197: exit status 2 (412.796571ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-364197 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-364197 -n no-preload-364197
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-364197 -n no-preload-364197: exit status 2 (410.539135ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-364197 -n no-preload-364197
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-364197 -n no-preload-364197: exit status 2 (411.18945ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-364197
helpers_test.go:243: (dbg) docker inspect no-preload-364197:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "29ec8599971bca540130714385d35c1912e3f64ae96dadf6085a0ebd160bd352",
	        "Created": "2025-09-19T23:10:34.49485581Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286872,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T23:12:11.949430547Z",
	            "FinishedAt": "2025-09-19T23:12:10.984909383Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/29ec8599971bca540130714385d35c1912e3f64ae96dadf6085a0ebd160bd352/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/29ec8599971bca540130714385d35c1912e3f64ae96dadf6085a0ebd160bd352/hostname",
	        "HostsPath": "/var/lib/docker/containers/29ec8599971bca540130714385d35c1912e3f64ae96dadf6085a0ebd160bd352/hosts",
	        "LogPath": "/var/lib/docker/containers/29ec8599971bca540130714385d35c1912e3f64ae96dadf6085a0ebd160bd352/29ec8599971bca540130714385d35c1912e3f64ae96dadf6085a0ebd160bd352-json.log",
	        "Name": "/no-preload-364197",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-364197:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-364197",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "29ec8599971bca540130714385d35c1912e3f64ae96dadf6085a0ebd160bd352",
	                "LowerDir": "/var/lib/docker/overlay2/b3feb910e66685d1f0777cc789221b1f5d4f7f0332bc96a2a55a77144d4aa72a-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3feb910e66685d1f0777cc789221b1f5d4f7f0332bc96a2a55a77144d4aa72a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3feb910e66685d1f0777cc789221b1f5d4f7f0332bc96a2a55a77144d4aa72a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3feb910e66685d1f0777cc789221b1f5d4f7f0332bc96a2a55a77144d4aa72a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-364197",
	                "Source": "/var/lib/docker/volumes/no-preload-364197/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-364197",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-364197",
	                "name.minikube.sigs.k8s.io": "no-preload-364197",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "20ec0619219617a12e62eabacfcb49e9df8e2245240fe7e04185e99ea01a00ae",
	            "SandboxKey": "/var/run/docker/netns/20ec06192196",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-364197": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:c5:56:e8:e1:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "76962f0867a93383c165f73f4cfd146d75602db376a54d44233a14e1bb615aac",
	                    "EndpointID": "1ebc872fb88cb4c12fb74342fc521e14de6cc9bd0e41de3ce64ef033532d9820",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-364197",
	                        "29ec8599971b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-364197 -n no-preload-364197
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-364197 -n no-preload-364197: exit status 2 (397.733527ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-364197 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-364197 logs -n 25: (2.2528694s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ start   │ -p old-k8s-version-757990 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:11 UTC │ 19 Sep 25 23:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-364197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:11 UTC │ 19 Sep 25 23:11 UTC │
	│ stop    │ -p no-preload-364197 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:11 UTC │ 19 Sep 25 23:12 UTC │
	│ addons  │ enable dashboard -p no-preload-364197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ start   │ -p no-preload-364197 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-403962 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ stop    │ -p embed-certs-403962 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ image   │ old-k8s-version-757990 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ pause   │ -p old-k8s-version-757990 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ unpause │ -p old-k8s-version-757990 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ delete  │ -p old-k8s-version-757990                                                                                                                                                                                                                           │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ delete  │ -p old-k8s-version-757990                                                                                                                                                                                                                           │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ delete  │ -p disable-driver-mounts-606373                                                                                                                                                                                                                     │ disable-driver-mounts-606373 │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ start   │ -p default-k8s-diff-port-149888 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-149888 │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-403962 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ start   │ -p embed-certs-403962 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:13 UTC │
	│ start   │ -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-430859    │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-430859    │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ delete  │ -p kubernetes-upgrade-430859                                                                                                                                                                                                                        │ kubernetes-upgrade-430859    │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ start   │ -p newest-cni-312465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-312465            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │                     │
	│ image   │ no-preload-364197 image list --format=json                                                                                                                                                                                                          │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ pause   │ -p no-preload-364197 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ unpause │ -p no-preload-364197 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ image   │ embed-certs-403962 image list --format=json                                                                                                                                                                                                         │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ pause   │ -p embed-certs-403962 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:13:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:13:27.238593  304826 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:13:27.238920  304826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:13:27.238933  304826 out.go:374] Setting ErrFile to fd 2...
	I0919 23:13:27.238939  304826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:13:27.239301  304826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 23:13:27.240254  304826 out.go:368] Setting JSON to false
	I0919 23:13:27.242293  304826 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6951,"bootTime":1758316656,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:13:27.242391  304826 start.go:140] virtualization: kvm guest
	I0919 23:13:27.245079  304826 out.go:179] * [newest-cni-312465] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:13:27.247014  304826 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:13:27.247038  304826 notify.go:220] Checking for updates...
	I0919 23:13:27.250017  304826 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:13:27.251473  304826 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:13:27.253044  304826 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 23:13:27.254720  304826 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:13:27.256145  304826 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:13:27.258280  304826 config.go:182] Loaded profile config "default-k8s-diff-port-149888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:27.258431  304826 config.go:182] Loaded profile config "embed-certs-403962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:27.258597  304826 config.go:182] Loaded profile config "no-preload-364197": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:27.258738  304826 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:13:27.288883  304826 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:13:27.288975  304826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:13:27.365354  304826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 23:13:27.353196914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:13:27.365506  304826 docker.go:318] overlay module found
	I0919 23:13:27.367763  304826 out.go:179] * Using the docker driver based on user configuration
	I0919 23:13:27.369311  304826 start.go:304] selected driver: docker
	I0919 23:13:27.369334  304826 start.go:918] validating driver "docker" against <nil>
	I0919 23:13:27.369348  304826 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:13:27.370111  304826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:13:27.453927  304826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 23:13:27.442609844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:13:27.454140  304826 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W0919 23:13:27.454193  304826 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0919 23:13:27.454507  304826 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0919 23:13:27.457066  304826 out.go:179] * Using Docker driver with root privileges
	I0919 23:13:27.458665  304826 cni.go:84] Creating CNI manager for ""
	I0919 23:13:27.458745  304826 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:13:27.458755  304826 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 23:13:27.458835  304826 start.go:348] cluster config:
	{Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:13:27.460214  304826 out.go:179] * Starting "newest-cni-312465" primary control-plane node in "newest-cni-312465" cluster
	I0919 23:13:27.461705  304826 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 23:13:27.463479  304826 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:13:27.464969  304826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:13:27.465036  304826 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 23:13:27.465066  304826 cache.go:58] Caching tarball of preloaded images
	I0919 23:13:27.465145  304826 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:13:27.465211  304826 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:13:27.465224  304826 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 23:13:27.465373  304826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json ...
	I0919 23:13:27.465402  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json: {Name:mkbe0b2096af0dfcb672d8d5ff02d95192e51311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:27.491881  304826 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:13:27.491906  304826 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:13:27.491929  304826 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:13:27.491965  304826 start.go:360] acquireMachinesLock for newest-cni-312465: {Name:mkdaed0f91b48ccb0806887f4c48e7b6207e9286 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:13:27.492089  304826 start.go:364] duration metric: took 98.144µs to acquireMachinesLock for "newest-cni-312465"
	I0919 23:13:27.492120  304826 start.go:93] Provisioning new machine with config: &{Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMet
rics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:13:27.492213  304826 start.go:125] createHost starting for "" (driver="docker")
	I0919 23:13:25.986611  294587 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 22.501936199s
	I0919 23:13:25.991147  294587 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:13:25.991278  294587 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I0919 23:13:25.991386  294587 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:13:25.991522  294587 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W0919 23:13:25.316055  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:27.322716  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:27.416884  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	W0919 23:13:29.942623  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	I0919 23:13:27.494730  304826 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:13:27.494955  304826 start.go:159] libmachine.API.Create for "newest-cni-312465" (driver="docker")
	I0919 23:13:27.494995  304826 client.go:168] LocalClient.Create starting
	I0919 23:13:27.495095  304826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 23:13:27.495131  304826 main.go:141] libmachine: Decoding PEM data...
	I0919 23:13:27.495171  304826 main.go:141] libmachine: Parsing certificate...
	I0919 23:13:27.495239  304826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 23:13:27.495270  304826 main.go:141] libmachine: Decoding PEM data...
	I0919 23:13:27.495286  304826 main.go:141] libmachine: Parsing certificate...
	I0919 23:13:27.495751  304826 cli_runner.go:164] Run: docker network inspect newest-cni-312465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:13:27.519239  304826 cli_runner.go:211] docker network inspect newest-cni-312465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:13:27.519336  304826 network_create.go:284] running [docker network inspect newest-cni-312465] to gather additional debugging logs...
	I0919 23:13:27.519357  304826 cli_runner.go:164] Run: docker network inspect newest-cni-312465
	W0919 23:13:27.542030  304826 cli_runner.go:211] docker network inspect newest-cni-312465 returned with exit code 1
	I0919 23:13:27.542062  304826 network_create.go:287] error running [docker network inspect newest-cni-312465]: docker network inspect newest-cni-312465: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-312465 not found
	I0919 23:13:27.542075  304826 network_create.go:289] output of [docker network inspect newest-cni-312465]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-312465 not found
	
	** /stderr **
	I0919 23:13:27.542219  304826 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:13:27.573077  304826 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-465af21e2d8d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:b5:e0:20:10:48} reservation:<nil>}
	I0919 23:13:27.574029  304826 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd9dd989b948 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:ff:32:51:a1:28} reservation:<nil>}
	I0919 23:13:27.575058  304826 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d5cd7c460be9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:d8:0f:d3:49:b9} reservation:<nil>}
	I0919 23:13:27.576219  304826 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-eeb244b5b4d9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:19:45:7a:f8:43} reservation:<nil>}
	I0919 23:13:27.577101  304826 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-76962f0867a9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:d8:43:3c:3c:e2} reservation:<nil>}
	I0919 23:13:27.578259  304826 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cf1dc0}
	I0919 23:13:27.578290  304826 network_create.go:124] attempt to create docker network newest-cni-312465 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0919 23:13:27.578338  304826 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-312465 newest-cni-312465
	I0919 23:13:27.664074  304826 network_create.go:108] docker network newest-cni-312465 192.168.94.0/24 created
	I0919 23:13:27.664108  304826 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-312465" container
	I0919 23:13:27.664204  304826 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:13:27.686848  304826 cli_runner.go:164] Run: docker volume create newest-cni-312465 --label name.minikube.sigs.k8s.io=newest-cni-312465 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:13:27.711517  304826 oci.go:103] Successfully created a docker volume newest-cni-312465
	I0919 23:13:27.711624  304826 cli_runner.go:164] Run: docker run --rm --name newest-cni-312465-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-312465 --entrypoint /usr/bin/test -v newest-cni-312465:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:13:28.191316  304826 oci.go:107] Successfully prepared a docker volume newest-cni-312465
	I0919 23:13:28.191366  304826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:13:28.191389  304826 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:13:28.191481  304826 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-312465:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 23:13:32.076573  304826 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-312465:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.885033462s)
	I0919 23:13:32.076612  304826 kic.go:203] duration metric: took 3.885218568s to extract preloaded images to volume ...
	W0919 23:13:32.076710  304826 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:13:32.076743  304826 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:13:32.076794  304826 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:13:32.149761  304826 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-312465 --name newest-cni-312465 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-312465 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-312465 --network newest-cni-312465 --ip 192.168.94.2 --volume newest-cni-312465:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:13:28.139399  294587 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.148131492s
	I0919 23:13:28.449976  294587 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.458741458s
	I0919 23:13:32.493086  294587 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.501778199s
	I0919 23:13:32.510785  294587 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:13:32.524242  294587 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:13:32.539521  294587 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:13:32.539729  294587 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-149888 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:13:32.551224  294587 kubeadm.go:310] [bootstrap-token] Using token: n81jvw.nat4ajoeag176u3n
	I0919 23:13:32.553385  294587 out.go:252]   - Configuring RBAC rules ...
	I0919 23:13:32.553522  294587 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:13:32.557811  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:13:32.567024  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:13:32.570531  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:13:32.576653  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:13:32.580237  294587 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:13:32.901145  294587 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:13:33.324739  294587 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:13:33.900632  294587 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:13:33.901573  294587 kubeadm.go:310] 
	I0919 23:13:33.901667  294587 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:13:33.901677  294587 kubeadm.go:310] 
	I0919 23:13:33.901751  294587 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:13:33.901758  294587 kubeadm.go:310] 
	I0919 23:13:33.901777  294587 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:13:33.901831  294587 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:13:33.901895  294587 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:13:33.901902  294587 kubeadm.go:310] 
	I0919 23:13:33.901944  294587 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:13:33.901974  294587 kubeadm.go:310] 
	I0919 23:13:33.902054  294587 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:13:33.902064  294587 kubeadm.go:310] 
	I0919 23:13:33.902143  294587 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:13:33.902266  294587 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:13:33.902331  294587 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:13:33.902339  294587 kubeadm.go:310] 
	I0919 23:13:33.902406  294587 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:13:33.902479  294587 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:13:33.902485  294587 kubeadm.go:310] 
	I0919 23:13:33.902551  294587 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token n81jvw.nat4ajoeag176u3n \
	I0919 23:13:33.902635  294587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 \
	I0919 23:13:33.902655  294587 kubeadm.go:310] 	--control-plane 
	I0919 23:13:33.902661  294587 kubeadm.go:310] 
	I0919 23:13:33.902730  294587 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:13:33.902737  294587 kubeadm.go:310] 
	I0919 23:13:33.902801  294587 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token n81jvw.nat4ajoeag176u3n \
	I0919 23:13:33.902883  294587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 
	I0919 23:13:33.906239  294587 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:13:33.906372  294587 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:13:33.906402  294587 cni.go:84] Creating CNI manager for ""
	I0919 23:13:33.906416  294587 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:13:33.908216  294587 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W0919 23:13:29.819116  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:31.826948  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:34.316941  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	I0919 23:13:32.476430  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Running}}
	I0919 23:13:32.500104  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:13:32.523104  304826 cli_runner.go:164] Run: docker exec newest-cni-312465 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:13:32.578263  304826 oci.go:144] the created container "newest-cni-312465" has a running status.
	I0919 23:13:32.578295  304826 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa...
	I0919 23:13:32.976039  304826 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:13:33.009077  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:13:33.031547  304826 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:13:33.031565  304826 kic_runner.go:114] Args: [docker exec --privileged newest-cni-312465 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:13:33.092603  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:13:33.115283  304826 machine.go:93] provisionDockerMachine start ...
	I0919 23:13:33.115380  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:33.139784  304826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:13:33.140058  304826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I0919 23:13:33.140073  304826 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:13:33.290427  304826 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-312465
	
	I0919 23:13:33.290458  304826 ubuntu.go:182] provisioning hostname "newest-cni-312465"
	I0919 23:13:33.290507  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:33.316275  304826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:13:33.316511  304826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I0919 23:13:33.316526  304826 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-312465 && echo "newest-cni-312465" | sudo tee /etc/hostname
	I0919 23:13:33.472768  304826 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-312465
	
	I0919 23:13:33.472864  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:33.494111  304826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:13:33.494398  304826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I0919 23:13:33.494430  304826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-312465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-312465/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-312465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:13:33.635421  304826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:13:33.635451  304826 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 23:13:33.635494  304826 ubuntu.go:190] setting up certificates
	I0919 23:13:33.635517  304826 provision.go:84] configureAuth start
	I0919 23:13:33.635574  304826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:13:33.655878  304826 provision.go:143] copyHostCerts
	I0919 23:13:33.655961  304826 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 23:13:33.655977  304826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 23:13:33.656058  304826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 23:13:33.656241  304826 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 23:13:33.656255  304826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 23:13:33.656304  304826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 23:13:33.656405  304826 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 23:13:33.656415  304826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 23:13:33.656457  304826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 23:13:33.656554  304826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.newest-cni-312465 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-312465]
	I0919 23:13:34.255292  304826 provision.go:177] copyRemoteCerts
	I0919 23:13:34.255368  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:13:34.255413  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.284316  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.387988  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 23:13:34.419504  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0919 23:13:34.448496  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:13:34.475661  304826 provision.go:87] duration metric: took 840.126723ms to configureAuth
	I0919 23:13:34.475694  304826 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:13:34.475872  304826 config.go:182] Loaded profile config "newest-cni-312465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:34.475881  304826 machine.go:96] duration metric: took 1.360576611s to provisionDockerMachine
	I0919 23:13:34.475891  304826 client.go:171] duration metric: took 6.980885128s to LocalClient.Create
	I0919 23:13:34.475913  304826 start.go:167] duration metric: took 6.980958258s to libmachine.API.Create "newest-cni-312465"
	I0919 23:13:34.475926  304826 start.go:293] postStartSetup for "newest-cni-312465" (driver="docker")
	I0919 23:13:34.475937  304826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:13:34.475995  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:13:34.476029  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.496668  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.598095  304826 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:13:34.602045  304826 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:13:34.602091  304826 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:13:34.602104  304826 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:13:34.602111  304826 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:13:34.602121  304826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 23:13:34.602190  304826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 23:13:34.602281  304826 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 23:13:34.602369  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:13:34.612660  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:13:34.643262  304826 start.go:296] duration metric: took 167.32169ms for postStartSetup
	I0919 23:13:34.643684  304826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:13:34.663272  304826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json ...
	I0919 23:13:34.663583  304826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:13:34.663633  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.683961  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.779205  304826 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:13:34.785070  304826 start.go:128] duration metric: took 7.292838847s to createHost
	I0919 23:13:34.785099  304826 start.go:83] releasing machines lock for "newest-cni-312465", held for 7.292995602s
	I0919 23:13:34.785189  304826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:13:34.807464  304826 ssh_runner.go:195] Run: cat /version.json
	I0919 23:13:34.807503  304826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:13:34.807575  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.807583  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.829219  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.829637  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:35.008352  304826 ssh_runner.go:195] Run: systemctl --version
	I0919 23:13:35.013908  304826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:13:35.019269  304826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:13:35.055596  304826 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:13:35.055680  304826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:13:35.090798  304826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 23:13:35.090825  304826 start.go:495] detecting cgroup driver to use...
	I0919 23:13:35.090862  304826 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:13:35.090925  304826 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 23:13:35.106670  304826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:13:35.120167  304826 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:13:35.120229  304826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:13:35.136229  304826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:13:35.152080  304826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:13:35.229432  304826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:13:35.314675  304826 docker.go:234] disabling docker service ...
	I0919 23:13:35.314746  304826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:13:35.336969  304826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:13:35.352061  304826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:13:35.433841  304826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:13:35.511892  304826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:13:35.525179  304826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:13:35.544848  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:13:35.558556  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:13:35.570787  304826 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:13:35.570874  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:13:35.583714  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:13:35.596563  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:13:35.608811  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:13:35.621274  304826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:13:35.632671  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:13:35.646560  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:13:35.659112  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
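The sed sequence above rewrites /etc/containerd/config.toml so that the sandbox (pause) image, SystemdCgroup setting, runc runtime version, CNI conf_dir and unprivileged-ports option all match what the kubeadm setup expects. A small hedged Go check (file path taken from the log, checks are illustrative) that the systemd cgroup patch actually landed:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Verifies that the containerd config was patched for the systemd cgroup
// driver and the expected pause image, as the sed commands above intend.
func main() {
	data, err := os.ReadFile("/etc/containerd/config.toml") // path from the log
	if err != nil {
		panic(err)
	}
	cfg := string(data)
	fmt.Println("SystemdCgroup = true  :", strings.Contains(cfg, "SystemdCgroup = true"))
	fmt.Println("pause:3.10.1 sandbox  :", strings.Contains(cfg, "registry.k8s.io/pause:3.10.1"))
}
```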
	I0919 23:13:35.671491  304826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:13:35.681987  304826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:13:35.693319  304826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:13:35.765943  304826 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 23:13:35.900474  304826 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 23:13:35.900553  304826 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 23:13:35.904775  304826 start.go:563] Will wait 60s for crictl version
	I0919 23:13:35.904838  304826 ssh_runner.go:195] Run: which crictl
	I0919 23:13:35.908969  304826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:13:35.948499  304826 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 23:13:35.948718  304826 ssh_runner.go:195] Run: containerd --version
	I0919 23:13:35.976417  304826 ssh_runner.go:195] Run: containerd --version
	I0919 23:13:36.005950  304826 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 23:13:36.007659  304826 cli_runner.go:164] Run: docker network inspect newest-cni-312465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:13:36.028772  304826 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0919 23:13:36.033878  304826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:13:36.053802  304826 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W0919 23:13:31.971038  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	W0919 23:13:34.412827  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	I0919 23:13:36.412824  286555 pod_ready.go:94] pod "coredns-66bc5c9577-xg99k" is "Ready"
	I0919 23:13:36.412859  286555 pod_ready.go:86] duration metric: took 1m14.00590752s for pod "coredns-66bc5c9577-xg99k" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.415705  286555 pod_ready.go:83] waiting for pod "etcd-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.420550  286555 pod_ready.go:94] pod "etcd-no-preload-364197" is "Ready"
	I0919 23:13:36.420580  286555 pod_ready.go:86] duration metric: took 4.848977ms for pod "etcd-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.423284  286555 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.428673  286555 pod_ready.go:94] pod "kube-apiserver-no-preload-364197" is "Ready"
	I0919 23:13:36.428703  286555 pod_ready.go:86] duration metric: took 5.394829ms for pod "kube-apiserver-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.431305  286555 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.610936  286555 pod_ready.go:94] pod "kube-controller-manager-no-preload-364197" is "Ready"
	I0919 23:13:36.610963  286555 pod_ready.go:86] duration metric: took 179.625984ms for pod "kube-controller-manager-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.056701  304826 kubeadm.go:875] updating cluster {Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:13:36.056877  304826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:13:36.057030  304826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:13:36.099591  304826 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:13:36.099615  304826 containerd.go:534] Images already preloaded, skipping extraction
	I0919 23:13:36.099675  304826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:13:36.143373  304826 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:13:36.143413  304826 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:13:36.143421  304826 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 containerd true true} ...
	I0919 23:13:36.143508  304826 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-312465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:13:36.143562  304826 ssh_runner.go:195] Run: sudo crictl info
	I0919 23:13:36.185797  304826 cni.go:84] Creating CNI manager for ""
	I0919 23:13:36.185828  304826 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:13:36.185843  304826 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0919 23:13:36.185875  304826 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-312465 NodeName:newest-cni-312465 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:13:36.186182  304826 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-312465"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:13:36.186269  304826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:13:36.198096  304826 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:13:36.198546  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:13:36.214736  304826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0919 23:13:36.244125  304826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:13:36.270995  304826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
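The multi-document kubeadm config printed above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new here. A hedged sketch that walks such a config and reports each document's kind plus the kubelet cgroupDriver, using the generic gopkg.in/yaml.v3 parser rather than the Kubernetes config types (the local file name is an assumption):

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Reads a multi-document kubeadm config (like the one printed above) and
// prints each document's kind and, where present, its cgroupDriver.
func main() {
	f, err := os.Open("kubeadm.yaml") // assumed local copy of the config above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("kind=%v", doc["kind"])
		if d, ok := doc["cgroupDriver"]; ok {
			fmt.Printf(" cgroupDriver=%v", d)
		}
		fmt.Println()
	}
}
```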
	I0919 23:13:36.295177  304826 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:13:36.299365  304826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:13:36.313119  304826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:13:36.396378  304826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:13:36.418497  304826 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465 for IP: 192.168.94.2
	I0919 23:13:36.418522  304826 certs.go:194] generating shared ca certs ...
	I0919 23:13:36.418544  304826 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.418705  304826 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 23:13:36.418761  304826 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 23:13:36.418775  304826 certs.go:256] generating profile certs ...
	I0919 23:13:36.418843  304826 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.key
	I0919 23:13:36.418860  304826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.crt with IP's: []
	I0919 23:13:36.531217  304826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.crt ...
	I0919 23:13:36.531247  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.crt: {Name:mk2dead7c7dd4abba877b10a34bd54e0741b0c51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.531436  304826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.key ...
	I0919 23:13:36.531449  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.key: {Name:mkb2dce7d200188d9475ab5211c83bb5dd871bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.531531  304826 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb
	I0919 23:13:36.531547  304826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0919 23:13:36.764681  304826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb ...
	I0919 23:13:36.764719  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb: {Name:mkd78eb5b6eba4ac120b530170a9a115208fec96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.764949  304826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb ...
	I0919 23:13:36.764969  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb: {Name:mk23f979dad453c3780b4813b8fc576ea9e94f4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.765077  304826 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt
	I0919 23:13:36.765208  304826 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key
	I0919 23:13:36.765299  304826 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key
	I0919 23:13:36.765323  304826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt with IP's: []
	I0919 23:13:36.811680  286555 pod_ready.go:83] waiting for pod "kube-proxy-t4j4z" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.211272  286555 pod_ready.go:94] pod "kube-proxy-t4j4z" is "Ready"
	I0919 23:13:37.211303  286555 pod_ready.go:86] duration metric: took 399.591313ms for pod "kube-proxy-t4j4z" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.410092  286555 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.810858  286555 pod_ready.go:94] pod "kube-scheduler-no-preload-364197" is "Ready"
	I0919 23:13:37.810890  286555 pod_ready.go:86] duration metric: took 400.769138ms for pod "kube-scheduler-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.810907  286555 pod_ready.go:40] duration metric: took 1m15.409243632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:13:37.871652  286555 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:13:37.873712  286555 out.go:179] * Done! kubectl is now configured to use "no-preload-364197" cluster and "default" namespace by default
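The pod_ready.go waits above poll each kube-system control-plane pod until its Ready condition turns True (the coredns pod took a little over a minute here). A minimal client-go sketch of that kind of wait; the pod name comes from the log, but the timeouts and polling interval are illustrative assumptions, not minikube's actual implementation:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Waits until a pod reports Ready=True, similar in spirit to the
// pod_ready.go checks in the log above.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const ns, name = "kube-system", "coredns-66bc5c9577-xg99k" // pod name taken from the log
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```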
	I0919 23:13:33.909671  294587 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 23:13:33.914917  294587 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 23:13:33.914945  294587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 23:13:33.936898  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 23:13:34.176650  294587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:13:34.176752  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:34.176780  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-149888 minikube.k8s.io/updated_at=2025_09_19T23_13_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=default-k8s-diff-port-149888 minikube.k8s.io/primary=true
	I0919 23:13:34.185919  294587 ops.go:34] apiserver oom_adj: -16
	I0919 23:13:34.285582  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:34.786386  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:35.286435  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:35.786591  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:36.286349  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:36.786365  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:37.286088  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:37.786249  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:38.286182  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:38.381035  294587 kubeadm.go:1105] duration metric: took 4.204361703s to wait for elevateKubeSystemPrivileges
	I0919 23:13:38.381076  294587 kubeadm.go:394] duration metric: took 40.106256802s to StartCluster
	I0919 23:13:38.381101  294587 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:38.381208  294587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:13:38.383043  294587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:38.383384  294587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:13:38.383418  294587 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:13:38.383497  294587 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:13:38.383584  294587 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-149888"
	I0919 23:13:38.383599  294587 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-149888"
	I0919 23:13:38.383622  294587 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-149888"
	I0919 23:13:38.383623  294587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-149888"
	I0919 23:13:38.383638  294587 config.go:182] Loaded profile config "default-k8s-diff-port-149888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:38.383654  294587 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:13:38.384100  294587 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:13:38.384352  294587 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:13:38.386876  294587 out.go:179] * Verifying Kubernetes components...
	I0919 23:13:38.392366  294587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:13:38.414274  294587 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:13:37.730859  304826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt ...
	I0919 23:13:37.730889  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt: {Name:mka643fd8f3814e682ac62f488ac921be438271e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:37.731102  304826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key ...
	I0919 23:13:37.731122  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key: {Name:mk1e0a6b750f125c5af55b66a1efb72f4537d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:37.731375  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:13:37.731416  304826 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:13:37.731424  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:13:37.731453  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:13:37.731475  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:13:37.731496  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:13:37.731531  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:13:37.732086  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:13:37.760205  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:13:37.788964  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:13:37.821273  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:13:37.854511  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 23:13:37.886302  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:13:37.919585  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:13:37.949973  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:13:37.982330  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:13:38.018976  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:13:38.049608  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:13:38.081886  304826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:13:38.109125  304826 ssh_runner.go:195] Run: openssl version
	I0919 23:13:38.118278  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:13:38.133041  304826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:13:38.138504  304826 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:13:38.138570  304826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:13:38.147725  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 23:13:38.160519  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:13:38.174178  304826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:13:38.179241  304826 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:13:38.179303  304826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:13:38.188486  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:13:38.203742  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:13:38.216299  304826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:13:38.221016  304826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:13:38.221087  304826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:13:38.229132  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
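Above, each CA bundle is copied under /usr/share/ca-certificates and then symlinked by its openssl subject hash into /etc/ssl/certs. A hedged Go sketch for sanity-checking one of those PEM files with the standard crypto/x509 package (it parses the cert and prints basic fields; it does not reproduce the `openssl x509 -hash` value used for the symlink name):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Parses a PEM certificate such as /usr/share/ca-certificates/minikubeCA.pem
// and prints its subject and expiry as a quick sanity check.
func main() {
	data, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem") // path taken from the log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		panic("no PEM certificate found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("subject :", cert.Subject)
	fmt.Println("expires :", cert.NotAfter)
}
```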
	I0919 23:13:38.242362  304826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:13:38.247181  304826 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:13:38.247247  304826 kubeadm.go:392] StartCluster: {Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:13:38.247335  304826 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:13:38.247392  304826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:13:38.289664  304826 cri.go:89] found id: ""
	I0919 23:13:38.289745  304826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:13:38.300688  304826 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:13:38.314602  304826 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:13:38.314666  304826 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:13:38.328513  304826 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:13:38.328532  304826 kubeadm.go:157] found existing configuration files:
	
	I0919 23:13:38.328573  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:13:38.340801  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:13:38.340902  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:13:38.354142  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:13:38.367990  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:13:38.368067  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:13:38.379710  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:13:38.393587  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:13:38.393654  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:13:38.406457  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:13:38.423007  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:13:38.423071  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:13:38.441889  304826 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:13:38.509349  304826 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:13:38.509425  304826 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:13:38.535354  304826 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:13:38.535436  304826 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:13:38.535487  304826 kubeadm.go:310] OS: Linux
	I0919 23:13:38.535547  304826 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:13:38.535585  304826 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:13:38.535633  304826 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:13:38.535689  304826 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:13:38.535753  304826 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:13:38.535813  304826 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:13:38.535850  304826 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:13:38.535885  304826 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:13:38.621848  304826 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:13:38.622065  304826 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:13:38.622186  304826 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:13:38.630978  304826 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:13:38.415345  294587 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:13:38.415366  294587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:13:38.415418  294587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:13:38.415735  294587 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-149888"
	I0919 23:13:38.415780  294587 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:13:38.416297  294587 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:13:38.445969  294587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:13:38.447208  294587 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:13:38.447231  294587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:13:38.447297  294587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:13:38.480457  294587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:13:38.540300  294587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:13:38.557619  294587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:13:38.594341  294587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:13:38.630764  294587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:13:38.799085  294587 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0919 23:13:38.800978  294587 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-149888" to be "Ready" ...
	I0919 23:13:38.812605  294587 node_ready.go:49] node "default-k8s-diff-port-149888" is "Ready"
	I0919 23:13:38.812642  294587 node_ready.go:38] duration metric: took 11.622008ms for node "default-k8s-diff-port-149888" to be "Ready" ...
	I0919 23:13:38.812666  294587 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:13:38.812750  294587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:13:39.036443  294587 api_server.go:72] duration metric: took 652.97537ms to wait for apiserver process to appear ...
	I0919 23:13:39.036471  294587 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:13:39.036490  294587 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0919 23:13:39.043372  294587 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0919 23:13:39.047190  294587 api_server.go:141] control plane version: v1.34.0
	I0919 23:13:39.047226  294587 api_server.go:131] duration metric: took 10.747839ms to wait for apiserver health ...
	I0919 23:13:39.047237  294587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:13:39.049788  294587 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W0919 23:13:36.317685  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:38.318647  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	I0919 23:13:39.819987  295194 pod_ready.go:94] pod "coredns-66bc5c9577-t6v26" is "Ready"
	I0919 23:13:39.820015  295194 pod_ready.go:86] duration metric: took 37.509771492s for pod "coredns-66bc5c9577-t6v26" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.822985  295194 pod_ready.go:83] waiting for pod "etcd-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.827553  295194 pod_ready.go:94] pod "etcd-embed-certs-403962" is "Ready"
	I0919 23:13:39.827574  295194 pod_ready.go:86] duration metric: took 4.567201ms for pod "etcd-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.829949  295194 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.834015  295194 pod_ready.go:94] pod "kube-apiserver-embed-certs-403962" is "Ready"
	I0919 23:13:39.834041  295194 pod_ready.go:86] duration metric: took 4.068136ms for pod "kube-apiserver-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.836103  295194 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.014492  295194 pod_ready.go:94] pod "kube-controller-manager-embed-certs-403962" is "Ready"
	I0919 23:13:40.014519  295194 pod_ready.go:86] duration metric: took 178.389529ms for pod "kube-controller-manager-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.214694  295194 pod_ready.go:83] waiting for pod "kube-proxy-5tf2s" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.614193  295194 pod_ready.go:94] pod "kube-proxy-5tf2s" is "Ready"
	I0919 23:13:40.614222  295194 pod_ready.go:86] duration metric: took 399.49287ms for pod "kube-proxy-5tf2s" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.814999  295194 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:41.214398  295194 pod_ready.go:94] pod "kube-scheduler-embed-certs-403962" is "Ready"
	I0919 23:13:41.214429  295194 pod_ready.go:86] duration metric: took 399.403485ms for pod "kube-scheduler-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:41.214439  295194 pod_ready.go:40] duration metric: took 38.913620351s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:13:41.267599  295194 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:13:41.270700  295194 out.go:179] * Done! kubectl is now configured to use "embed-certs-403962" cluster and "default" namespace by default
	I0919 23:13:38.634403  304826 out.go:252]   - Generating certificates and keys ...
	I0919 23:13:38.634645  304826 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:13:38.634729  304826 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:13:38.733514  304826 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:13:39.062476  304826 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:13:39.133445  304826 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:13:39.439953  304826 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:13:39.872072  304826 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:13:39.872221  304826 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-312465] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0919 23:13:39.972922  304826 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:13:39.973129  304826 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-312465] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0919 23:13:40.957549  304826 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:13:41.144394  304826 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:13:41.426739  304826 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:13:41.426849  304826 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:13:41.554555  304826 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:13:41.608199  304826 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:13:41.645796  304826 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:13:41.778911  304826 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:13:41.900942  304826 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:13:41.901396  304826 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:13:41.905522  304826 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:13:41.907209  304826 out.go:252]   - Booting up control plane ...
	I0919 23:13:41.907335  304826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:13:41.907460  304826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:13:41.907982  304826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:13:41.919781  304826 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:13:41.919920  304826 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:13:41.926298  304826 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:13:41.926476  304826 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:13:41.926547  304826 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:13:42.017500  304826 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:13:42.017660  304826 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
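The kubelet-check above polls http://127.0.0.1:10248/healthz until the kubelet answers, and the earlier apiserver wait does the same against /healthz on the API port. A small Go sketch of such a health poll (the endpoint and the 4m0s budget come from the log; the retry interval is an assumption):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Polls a healthz endpoint until it returns HTTP 200 or the deadline passes,
// like the kubelet-check / apiserver healthz waits in the log above.
func main() {
	const url = "http://127.0.0.1:10248/healthz" // kubelet healthz, as in the log
	deadline := time.Now().Add(4 * time.Minute)

	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for", url)
}
```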
	I0919 23:13:39.052217  294587 addons.go:514] duration metric: took 668.711417ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 23:13:39.053005  294587 system_pods.go:59] 9 kube-system pods found
	I0919 23:13:39.053044  294587 system_pods.go:61] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.053057  294587 system_pods.go:61] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.053070  294587 system_pods.go:61] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:13:39.053085  294587 system_pods.go:61] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 23:13:39.053092  294587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.053105  294587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.053113  294587 system_pods.go:61] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.053135  294587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.053144  294587 system_pods.go:61] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.053189  294587 system_pods.go:74] duration metric: took 5.910482ms to wait for pod list to return data ...
	I0919 23:13:39.053205  294587 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:13:39.055828  294587 default_sa.go:45] found service account: "default"
	I0919 23:13:39.055846  294587 default_sa.go:55] duration metric: took 2.635401ms for default service account to be created ...
	I0919 23:13:39.055855  294587 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:13:39.058754  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:39.058787  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.058797  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.058807  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:13:39.058821  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 23:13:39.058830  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.058841  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.058846  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.058852  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.058857  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.058878  294587 retry.go:31] will retry after 270.945985ms: missing components: kube-dns, kube-proxy
	I0919 23:13:39.304737  294587 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-149888" context rescaled to 1 replicas
	I0919 23:13:39.337213  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:39.337253  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.337265  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.337271  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:39.337278  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:39.337284  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.337290  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.337298  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.337305  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.337314  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.337335  294587 retry.go:31] will retry after 357.220825ms: missing components: kube-dns
	I0919 23:13:39.698915  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:39.698949  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.698958  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.698966  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:39.698975  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:39.698980  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.698987  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.698995  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.699002  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.699013  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.699035  294587 retry.go:31] will retry after 375.514546ms: missing components: kube-dns
	I0919 23:13:40.079067  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:40.079105  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:40.079117  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:40.079125  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:40.079131  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:40.079136  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:40.079141  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:40.079148  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:40.079191  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:40.079199  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:40.079216  294587 retry.go:31] will retry after 558.632768ms: missing components: kube-dns
	I0919 23:13:40.643894  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:40.643930  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:40.643938  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:40.643947  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:40.643953  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:40.643960  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:40.643970  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:40.643983  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:40.643989  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:40.644010  294587 retry.go:31] will retry after 761.400913ms: missing components: kube-dns
	I0919 23:13:41.410199  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:41.410236  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:41.410250  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:41.410257  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:41.410263  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:41.410269  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:41.410277  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:41.410285  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:41.410291  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:41.410312  294587 retry.go:31] will retry after 629.477098ms: missing components: kube-dns
	I0919 23:13:42.043664  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:42.043705  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:42.043715  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:42.043724  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:42.043729  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:42.043739  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:42.043747  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:42.043753  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:42.043762  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:42.043778  294587 retry.go:31] will retry after 1.069085397s: missing components: kube-dns
	I0919 23:13:43.117253  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:43.117290  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:43.117297  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:43.117305  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:43.117308  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:43.117312  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:43.117318  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:43.117322  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:43.117326  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:43.117339  294587 retry.go:31] will retry after 1.031094562s: missing components: kube-dns
	I0919 23:13:44.153419  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:44.153454  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:44.153460  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:44.153467  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:44.153472  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:44.153475  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:44.153480  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:44.153484  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:44.153487  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:44.153499  294587 retry.go:31] will retry after 1.715155668s: missing components: kube-dns
	I0919 23:13:45.873736  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:45.873776  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:45.873786  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:45.873794  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:45.873800  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:45.873805  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:45.873820  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:45.873826  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:45.873832  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:45.873863  294587 retry.go:31] will retry after 2.128059142s: missing components: kube-dns
	I0919 23:13:48.006564  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:48.006602  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:48.006610  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:48.006618  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:48.006624  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:48.006630  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:48.006635  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:48.006640  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:48.006647  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:48.006662  294587 retry.go:31] will retry after 1.782367114s: missing components: kube-dns
	I0919 23:13:50.518700  304826 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 8.501106835s
	I0919 23:13:50.522818  304826 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:13:50.522974  304826 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I0919 23:13:50.523114  304826 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:13:50.523256  304826 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:13:49.793148  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:49.793210  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:49.793217  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:49.793223  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:49.793229  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:49.793232  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:49.793243  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:49.793246  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:49.793251  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:49.793265  294587 retry.go:31] will retry after 2.338572613s: missing components: kube-dns
	I0919 23:13:52.140344  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:52.140388  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:52.140397  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:52.140407  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:52.140413  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:52.140419  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:52.140428  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:52.140435  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:52.140442  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:52.140471  294587 retry.go:31] will retry after 3.086457646s: missing components: kube-dns
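The repeated "retry.go:31] will retry after ..." lines above come from minikube polling the kube-system pod list and backing off with a growing, jittered delay until every required component (here kube-dns) is Running. A rough sketch of that wait loop; missingComponents is a hypothetical stand-in for the pod-list check, and none of this is minikube's real code:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// missingComponents returns the required kube-system components that are not
// yet Running. Hypothetical helper standing in for the API query logged above.
func missingComponents() []string {
	// ... list kube-system pods and inspect their phases ...
	return []string{"kube-dns"}
}

// waitForSystemPods retries with a jittered, growing delay until nothing is
// missing or the overall timeout expires. Illustrative only.
func waitForSystemPods(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		missing := missingComponents()
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; still missing: %v", missing)
		}
		// Add jitter and grow the delay, mirroring the varying waits in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: missing components: %v\n", sleep, missing)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	_ = waitForSystemPods(6 * time.Minute)
}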
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	18e79812d03fa       07655ddf2eebe       6 seconds ago        Running             kubernetes-dashboard        2                   e0f9415a2add2       kubernetes-dashboard-855c9754f9-rj7g8
	b9a3ddd53c098       523cad1a4df73       33 seconds ago       Exited              dashboard-metrics-scraper   3                   ae9d710a70595       dashboard-metrics-scraper-6ffb444bf9-vdb6s
	84fd26f46a650       6e38f40d628db       45 seconds ago       Running             storage-provisioner         2                   a8ae1cab0b9d5       storage-provisioner
	1c4ad1fabb59c       df0860106674d       51 seconds ago       Running             kube-proxy                  3                   17832c7bff5ef       kube-proxy-t4j4z
	265b08adaae73       07655ddf2eebe       53 seconds ago       Exited              kubernetes-dashboard        1                   e0f9415a2add2       kubernetes-dashboard-855c9754f9-rj7g8
	02daa0a76ef2f       df0860106674d       About a minute ago   Exited              kube-proxy                  2                   17832c7bff5ef       kube-proxy-t4j4z
	70e9a93e23676       409467f978b4a       About a minute ago   Running             kindnet-cni                 1                   8c60e0ef68a3d       kindnet-89psw
	e6ce4bf79ede2       56cc512116c8f       About a minute ago   Running             busybox                     1                   b54c425145c6f       busybox
	658e15ecef2da       52546a367cc9e       About a minute ago   Running             coredns                     1                   ea0e1058dc597       coredns-66bc5c9577-xg99k
	b99df30ad6006       6e38f40d628db       About a minute ago   Exited              storage-provisioner         1                   a8ae1cab0b9d5       storage-provisioner
	9cfac78cf1230       46169d968e920       About a minute ago   Running             kube-scheduler              1                   667c2756f1609       kube-scheduler-no-preload-364197
	cefe6d56503ab       a0af72f2ec6d6       About a minute ago   Running             kube-controller-manager     1                   c42fe513623c8       kube-controller-manager-no-preload-364197
	6bc33bc1397be       90550c43ad2bc       About a minute ago   Running             kube-apiserver              1                   1c48d024bf452       kube-apiserver-no-preload-364197
	43f594ae64f30       5f1f5298c888d       About a minute ago   Running             etcd                        1                   3699fbc2f2946       etcd-no-preload-364197
	caa9c3c72ac21       56cc512116c8f       2 minutes ago        Exited              busybox                     0                   6708bd673a9d3       busybox
	81516948c500c       52546a367cc9e       2 minutes ago        Exited              coredns                     0                   ddc11fc450968       coredns-66bc5c9577-xg99k
	05a104972ade2       409467f978b4a       2 minutes ago        Exited              kindnet-cni                 0                   5ff81d48d2530       kindnet-89psw
	4033582ceab6b       46169d968e920       2 minutes ago        Exited              kube-scheduler              0                   7363cbc4e9ca2       kube-scheduler-no-preload-364197
	4f4b7cb19d71d       90550c43ad2bc       2 minutes ago        Exited              kube-apiserver              0                   66d2c4b632c14       kube-apiserver-no-preload-364197
	22f8dfd9e25c5       5f1f5298c888d       2 minutes ago        Exited              etcd                        0                   b9ab3a8d8a6f5       etcd-no-preload-364197
	a725a4bde25d4       a0af72f2ec6d6       2 minutes ago        Exited              kube-controller-manager     0                   decb39ebcb9ea       kube-controller-manager-no-preload-364197
	
	
	==> containerd <==
	Sep 19 23:13:20 no-preload-364197 containerd[472]: time="2025-09-19T23:13:20.650214781Z" level=info msg="StartContainer for \"b9a3ddd53c0982f2b6456f555d5b0736bed2fe411350d742099824f5f0e34944\""
	Sep 19 23:13:20 no-preload-364197 containerd[472]: time="2025-09-19T23:13:20.706596053Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 23:13:20 no-preload-364197 containerd[472]: time="2025-09-19T23:13:20.709743212Z" level=info msg="StartContainer for \"b9a3ddd53c0982f2b6456f555d5b0736bed2fe411350d742099824f5f0e34944\" returns successfully"
	Sep 19 23:13:20 no-preload-364197 containerd[472]: time="2025-09-19T23:13:20.726847249Z" level=info msg="received exit event container_id:\"b9a3ddd53c0982f2b6456f555d5b0736bed2fe411350d742099824f5f0e34944\"  id:\"b9a3ddd53c0982f2b6456f555d5b0736bed2fe411350d742099824f5f0e34944\"  pid:2721  exit_status:1  exited_at:{seconds:1758323600  nanos:726596770}"
	Sep 19 23:13:21 no-preload-364197 containerd[472]: time="2025-09-19T23:13:21.013313342Z" level=info msg="shim disconnected" id=b9a3ddd53c0982f2b6456f555d5b0736bed2fe411350d742099824f5f0e34944 namespace=k8s.io
	Sep 19 23:13:21 no-preload-364197 containerd[472]: time="2025-09-19T23:13:21.013357988Z" level=warning msg="cleaning up after shim disconnected" id=b9a3ddd53c0982f2b6456f555d5b0736bed2fe411350d742099824f5f0e34944 namespace=k8s.io
	Sep 19 23:13:21 no-preload-364197 containerd[472]: time="2025-09-19T23:13:21.013369773Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 19 23:13:21 no-preload-364197 containerd[472]: time="2025-09-19T23:13:21.567199937Z" level=info msg="RemoveContainer for \"9251dd456de174d10454fb5a3976796e4ab4d8c60e48c2f9db73321ed89ecc44\""
	Sep 19 23:13:21 no-preload-364197 containerd[472]: time="2025-09-19T23:13:21.572481311Z" level=info msg="RemoveContainer for \"9251dd456de174d10454fb5a3976796e4ab4d8c60e48c2f9db73321ed89ecc44\" returns successfully"
	Sep 19 23:13:30 no-preload-364197 containerd[472]: time="2025-09-19T23:13:30.666506678Z" level=info msg="received exit event container_id:\"265b08adaae7361b442c8dbb2cece3f4be85d7eeb4a1035bce7e3fc80dfe2381\"  id:\"265b08adaae7361b442c8dbb2cece3f4be85d7eeb4a1035bce7e3fc80dfe2381\"  pid:2471  exit_status:2  exited_at:{seconds:1758323610  nanos:666208398}"
	Sep 19 23:13:32 no-preload-364197 containerd[472]: time="2025-09-19T23:13:32.058629920Z" level=info msg="shim disconnected" id=265b08adaae7361b442c8dbb2cece3f4be85d7eeb4a1035bce7e3fc80dfe2381 namespace=k8s.io
	Sep 19 23:13:32 no-preload-364197 containerd[472]: time="2025-09-19T23:13:32.058837363Z" level=warning msg="cleaning up after shim disconnected" id=265b08adaae7361b442c8dbb2cece3f4be85d7eeb4a1035bce7e3fc80dfe2381 namespace=k8s.io
	Sep 19 23:13:32 no-preload-364197 containerd[472]: time="2025-09-19T23:13:32.058860582Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 19 23:13:32 no-preload-364197 containerd[472]: time="2025-09-19T23:13:32.603990752Z" level=info msg="RemoveContainer for \"9fd9bbb959bd038c892ae7e6c20d1f677c36aea7f9bc40f6506a193549bf3674\""
	Sep 19 23:13:32 no-preload-364197 containerd[472]: time="2025-09-19T23:13:32.609584008Z" level=info msg="RemoveContainer for \"9fd9bbb959bd038c892ae7e6c20d1f677c36aea7f9bc40f6506a193549bf3674\" returns successfully"
	Sep 19 23:13:46 no-preload-364197 containerd[472]: time="2025-09-19T23:13:46.235478866Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 23:13:46 no-preload-364197 containerd[472]: time="2025-09-19T23:13:46.281311320Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 19 23:13:46 no-preload-364197 containerd[472]: time="2025-09-19T23:13:46.283027512Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 19 23:13:46 no-preload-364197 containerd[472]: time="2025-09-19T23:13:46.283099128Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 19 23:13:47 no-preload-364197 containerd[472]: time="2025-09-19T23:13:47.233298876Z" level=info msg="CreateContainer within sandbox \"e0f9415a2add2d1cb72ecfb8b8814a682dd2353cfa15818eb3d83ef7c26e3991\" for container &ContainerMetadata{Name:kubernetes-dashboard,Attempt:2,}"
	Sep 19 23:13:47 no-preload-364197 containerd[472]: time="2025-09-19T23:13:47.246747288Z" level=info msg="CreateContainer within sandbox \"e0f9415a2add2d1cb72ecfb8b8814a682dd2353cfa15818eb3d83ef7c26e3991\" for &ContainerMetadata{Name:kubernetes-dashboard,Attempt:2,} returns container id \"18e79812d03fa9def5117aa7a3a884d976dfdc94c077b748a590d0c5307a33b4\""
	Sep 19 23:13:47 no-preload-364197 containerd[472]: time="2025-09-19T23:13:47.247407489Z" level=info msg="StartContainer for \"18e79812d03fa9def5117aa7a3a884d976dfdc94c077b748a590d0c5307a33b4\""
	Sep 19 23:13:47 no-preload-364197 containerd[472]: time="2025-09-19T23:13:47.308656259Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 23:13:47 no-preload-364197 containerd[472]: time="2025-09-19T23:13:47.313650727Z" level=info msg="StartContainer for \"18e79812d03fa9def5117aa7a3a884d976dfdc94c077b748a590d0c5307a33b4\" returns successfully"
	Sep 19 23:13:53 no-preload-364197 containerd[472]: time="2025-09-19T23:13:53.897893604Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	
	
	==> coredns [658e15ecef2dafe3d0bf9b9edb26ac278640956ddad27a4b7a3c62bc89fb2506] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:56992 - 40363 "HINFO IN 3109128270378564685.5637816105214737005. udp 57 false 512" - - 0 2.000323007s
	[ERROR] plugin/errors: 2 3109128270378564685.5637816105214737005. HINFO: read udp 10.244.0.2:38291->192.168.85.1:53: i/o timeout
	[INFO] 127.0.0.1:56564 - 29966 "HINFO IN 3109128270378564685.5637816105214737005. udp 57 false 512" - - 0 2.001044941s
	[ERROR] plugin/errors: 2 3109128270378564685.5637816105214737005. HINFO: read udp 10.244.0.2:34418->192.168.85.1:53: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] 127.0.0.1:43699 - 39336 "HINFO IN 3109128270378564685.5637816105214737005. udp 57 false 512" - - 0 2.000715331s
	[ERROR] plugin/errors: 2 3109128270378564685.5637816105214737005. HINFO: read udp 10.244.0.2:47656->192.168.85.1:53: i/o timeout
	[INFO] 127.0.0.1:40867 - 39608 "HINFO IN 3109128270378564685.5637816105214737005. udp 57 false 512" - - 0 2.001048794s
	[ERROR] plugin/errors: 2 3109128270378564685.5637816105214737005. HINFO: read udp 10.244.0.2:46436->192.168.85.1:53: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
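The reflector timeouts above mean this CoreDNS instance could not reach the in-cluster API service at 10.96.0.1:443 during that window, so the kubernetes plugin never finished its initial list. A minimal client-go check that exercises the same call path from any pod; a sketch that assumes in-cluster service-account credentials and the usual client-go module dependencies, and is not part of the test suite:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Build a client from the pod's service account, the same way the CoreDNS
	// kubernetes plugin does.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cfg.Timeout = 5 * time.Second
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Listing Services mirrors one of the reflector calls that timed out above.
	svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{Limit: 500})
	if err != nil {
		fmt.Println("API unreachable:", err)
		return
	}
	fmt.Printf("API reachable, %d services listed\n", len(svcs.Items))
}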
	
	
	==> coredns [81516948c500cddd74ea5e02f4e3d75fcaf2b7d2aef946d84a3656def8fdf90b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51151 - 25647 "HINFO IN 2174771630841354638.2989325533945903380. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062767086s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               no-preload-364197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-364197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=no-preload-364197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_11_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:11:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-364197
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:13:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:13:53 +0000   Fri, 19 Sep 2025 23:11:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:13:53 +0000   Fri, 19 Sep 2025 23:11:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:13:53 +0000   Fri, 19 Sep 2025 23:11:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:13:53 +0000   Fri, 19 Sep 2025 23:11:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-364197
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 18b5d0a888534bc6af7b0590d1485844
	  System UUID:                dddc9917-cb17-435a-a3e1-cd4a58751c59
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 coredns-66bc5c9577-xg99k                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m43s
	  kube-system                 etcd-no-preload-364197                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m49s
	  kube-system                 kindnet-89psw                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m44s
	  kube-system                 kube-apiserver-no-preload-364197              250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 kube-controller-manager-no-preload-364197     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 kube-proxy-t4j4z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-scheduler-no-preload-364197              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 metrics-server-746fcd58dc-54wcq               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vdb6s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rj7g8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 51s                kube-proxy       
	  Normal  Starting                 2m42s              kube-proxy       
	  Normal  Starting                 2m49s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m49s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m49s              kubelet          Node no-preload-364197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m49s              kubelet          Node no-preload-364197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m49s              kubelet          Node no-preload-364197 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m44s              node-controller  Node no-preload-364197 event: Registered Node no-preload-364197 in Controller
	  Normal  NodeHasNoDiskPressure    96s (x8 over 96s)  kubelet          Node no-preload-364197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  96s (x8 over 96s)  kubelet          Node no-preload-364197 status is now: NodeHasSufficientMemory
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     96s (x7 over 96s)  kubelet          Node no-preload-364197 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           90s                node-controller  Node no-preload-364197 event: Registered Node no-preload-364197 in Controller
	  Normal  Starting                 2s                 kubelet          Starting kubelet.
	  Normal  Starting                 2s                 kubelet          Starting kubelet.
	  Normal  Starting                 1s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  1s                 kubelet          Node no-preload-364197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    1s                 kubelet          Node no-preload-364197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     1s                 kubelet          Node no-preload-364197 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.995983] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.504855] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.994952] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.505945] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501588] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.993278] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.507907] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500995] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.992251] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.509112] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501500] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989799] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.510643] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501653] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989383] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.511830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500482] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989056] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.512088] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500241] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501649] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501291] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501422] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[Sep19 23:13] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	
	
	==> etcd [22f8dfd9e25c5e22c40757e1e8d4aca05929cb7e5bacc483cd10eca2a6cbaf53] <==
	{"level":"warn","ts":"2025-09-19T23:11:02.022915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.033669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.047563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.055349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.066708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.077867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.085100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.095050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.104401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.111576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.119779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.127986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.135874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.143987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.151894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.159535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.167785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.174810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.187644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.200143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.209910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.216841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.225643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.274234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60420","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T23:11:10.491642Z","caller":"traceutil/trace.go:172","msg":"trace[585345196] transaction","detail":"{read_only:false; response_revision:301; number_of_response:1; }","duration":"132.970471ms","start":"2025-09-19T23:11:10.358639Z","end":"2025-09-19T23:11:10.491610Z","steps":["trace[585345196] 'process raft request'  (duration: 132.788165ms)"],"step_count":1}
	
	
	==> etcd [43f594ae64f30adcfece56bff232d9a5d66d10b57aa5eb81dcd23096c4d9fefe] <==
	{"level":"warn","ts":"2025-09-19T23:12:20.419258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.427563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.434267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.449875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.457062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.464584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.472378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.479128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.485710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.493422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.501409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.508450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.514933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.522123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.529109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.536591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.549358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.557772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:51.745932Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.388145ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722595826412087472 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-sbo7gcm2fodinrvdtgquuedjoq\" mod_revision:692 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-sbo7gcm2fodinrvdtgquuedjoq\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-sbo7gcm2fodinrvdtgquuedjoq\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:12:51.746065Z","caller":"traceutil/trace.go:172","msg":"trace[735937185] transaction","detail":"{read_only:false; response_revision:701; number_of_response:1; }","duration":"144.569588ms","start":"2025-09-19T23:12:51.601479Z","end":"2025-09-19T23:12:51.746048Z","steps":["trace[735937185] 'process raft request'  (duration: 23.342875ms)","trace[735937185] 'compare'  (duration: 120.100805ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:12:52.154410Z","caller":"traceutil/trace.go:172","msg":"trace[275387585] transaction","detail":"{read_only:false; response_revision:702; number_of_response:1; }","duration":"102.189119ms","start":"2025-09-19T23:12:52.052200Z","end":"2025-09-19T23:12:52.154389Z","steps":["trace[275387585] 'process raft request'  (duration: 102.037538ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:12:52.658614Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.154537ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722595826412087483 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:06ed99643ffce8ba>","response":"size:40"}
	{"level":"warn","ts":"2025-09-19T23:13:12.807135Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.413548ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:474"}
	{"level":"info","ts":"2025-09-19T23:13:12.807245Z","caller":"traceutil/trace.go:172","msg":"trace[1679246197] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:752; }","duration":"182.593348ms","start":"2025-09-19T23:13:12.624636Z","end":"2025-09-19T23:13:12.807230Z","steps":["trace[1679246197] 'range keys from in-memory index tree'  (duration: 182.267088ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:13:22.201794Z","caller":"traceutil/trace.go:172","msg":"trace[879742454] transaction","detail":"{read_only:false; response_revision:759; number_of_response:1; }","duration":"122.917396ms","start":"2025-09-19T23:13:22.078856Z","end":"2025-09-19T23:13:22.201774Z","steps":["trace[879742454] 'process raft request'  (duration: 122.772171ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:13:54 up  1:56,  0 users,  load average: 5.01, 3.90, 2.42
	Linux no-preload-364197 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [05a104972ade2c044b7ecfd589a3e6279429a5573d073092fa3575eb43f33fb6] <==
	I0919 23:11:15.104537       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0919 23:11:15.104827       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0919 23:11:15.105011       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:11:15.105033       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:11:15.105056       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:11:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:11:15.405038       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:11:15.405078       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:11:15.405089       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:11:15.405909       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0919 23:11:15.706466       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:11:15.706551       1 metrics.go:72] Registering metrics
	I0919 23:11:15.770426       1 controller.go:711] "Syncing nftables rules"
	I0919 23:11:25.408283       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:11:25.408356       1 main.go:301] handling current node
	I0919 23:11:35.414272       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:11:35.414304       1 main.go:301] handling current node
	I0919 23:11:45.409230       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:11:45.409262       1 main.go:301] handling current node
	I0919 23:11:55.407306       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:11:55.407358       1 main.go:301] handling current node
	
	
	==> kindnet [70e9a93e23676ba38f41c165c77120324ad986079be2f1dda89a089c06e82ec7] <==
	I0919 23:12:22.493733       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:12:22.493752       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:12:22.493772       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:12:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:12:22.749276       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:12:22.793094       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:12:22.793255       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:12:22.793491       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0919 23:12:52.794505       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0919 23:12:52.794505       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0919 23:12:52.794556       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0919 23:12:52.794624       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0919 23:13:23.957643       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0919 23:13:23.968096       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0919 23:13:23.970968       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0919 23:13:24.173171       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I0919 23:13:26.893920       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:13:26.893957       1 metrics.go:72] Registering metrics
	I0919 23:13:26.894022       1 controller.go:711] "Syncing nftables rules"
	I0919 23:13:32.749544       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:13:32.749603       1 main.go:301] handling current node
	I0919 23:13:42.754230       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:13:42.754277       1 main.go:301] handling current node
	I0919 23:13:52.749266       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:13:52.749294       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4f4b7cb19d71d424c1f0b8eed4886e134667affbc967fc67d6ab5091a5ec5afc] <==
	I0919 23:11:10.760678       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:11:10.811774       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:11:10.817215       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:11:10.860056       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E0919 23:11:58.296051       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:55532: use of closed network connection
	I0919 23:11:59.083820       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 23:11:59.089604       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:11:59.089675       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 23:11:59.089795       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0919 23:11:59.174580       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.96.51.251"}
	W0919 23:11:59.185075       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:11:59.185137       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 23:11:59.188366       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	W0919 23:11:59.192847       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:11:59.192908       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [6bc33bc1397bee9ae4cddae4044808fcc50a7a9cf5d158ec3fa6eb80e16e52ab] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:12:22.109568       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:12:24.482118       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:12:24.780094       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 23:12:24.930398       1 controller.go:667] quota admission added evaluator for: endpoints
	I0919 23:12:25.035486       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	E0919 23:13:21.000684       1 dynamic_cafile_content.go:170] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:21.000906       1 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:21.002020       1 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:21.002185       1 dynamic_cafile_content.go:170] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:21.002209       1 dynamic_cafile_content.go:170] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	W0919 23:13:22.109349       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:13:22.109405       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 23:13:22.109422       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 23:13:22.110653       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:13:22.110935       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:13:22.110963       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:13:32.519551       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:13:49.409412       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [a725a4bde25d4c27402431bef26b0b2528b9ee8ce86de9668a3bc6c57218ae97] <==
	I0919 23:11:10.057104       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 23:11:10.057526       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 23:11:10.057666       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 23:11:10.058953       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 23:11:10.058995       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 23:11:10.061668       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 23:11:10.064887       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 23:11:10.066080       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0919 23:11:10.066140       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0919 23:11:10.066247       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0919 23:11:10.066263       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0919 23:11:10.066270       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0919 23:11:10.069513       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:11:10.075840       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-364197" podCIDRs=["10.244.0.0/24"]
	I0919 23:11:10.075887       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 23:11:10.077851       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 23:11:10.088352       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:11:10.090511       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 23:11:10.098863       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 23:11:10.101303       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 23:11:10.101309       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 23:11:10.102452       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 23:11:10.110033       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0919 23:11:10.110532       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0919 23:11:10.114488       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [cefe6d56503ab49c05eeb71c647a5070bf2298f7e5024960e954a0fa1becced9] <==
	I0919 23:12:24.416697       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 23:12:24.419047       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 23:12:24.425953       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 23:12:24.426913       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 23:12:24.426975       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 23:12:24.426974       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0919 23:12:24.426990       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0919 23:12:24.427030       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0919 23:12:24.427114       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:12:24.427130       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 23:12:24.427136       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 23:12:24.431910       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:12:24.433192       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:12:24.442536       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 23:12:24.448291       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0919 23:12:54.438195       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:12:54.458746       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:13:23.049966       1 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:23.049965       1 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:23.049999       1 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:23.050417       1 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:24.445094       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:13:24.469586       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:13:54.452497       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:13:54.483957       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [02daa0a76ef2fe82a3406571b3c039ffa479d47918842e391ad088b0d5deba09] <==
	E0919 23:12:36.414777       1 run.go:72] "command failed" err="failed complete: too many open files"
	
	
	==> kube-proxy [1c4ad1fabb59c4e57a9b67d110a8968a1a0978b942894c62df9d88aa2fdda568] <==
	I0919 23:13:02.425116       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:13:02.490127       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:13:02.591097       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:13:02.591147       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0919 23:13:02.591271       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:13:02.632139       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:13:02.632243       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:13:02.642997       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:13:02.643690       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:13:02.643744       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:13:02.645359       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:13:02.645377       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:13:02.645444       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:13:02.645455       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:13:02.645494       1 config.go:200] "Starting service config controller"
	I0919 23:13:02.645503       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:13:02.645742       1 config.go:309] "Starting node config controller"
	I0919 23:13:02.645756       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:13:02.746093       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:13:02.746189       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 23:13:02.746739       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:13:02.746782       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4033582ceab6ba6b5c4f950f6070e6fc0d1d797c421c10c1de6e06129df50b54] <==
	E0919 23:11:02.941574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 23:11:02.941640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 23:11:02.942100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 23:11:02.943275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 23:11:02.943468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 23:11:02.943586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 23:11:02.942276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 23:11:02.943748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 23:11:02.943835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 23:11:02.944059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 23:11:02.943599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 23:11:02.945060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 23:11:03.804349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 23:11:03.812989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 23:11:03.836582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 23:11:03.886936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 23:11:03.907211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 23:11:04.034583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 23:11:04.045992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 23:11:04.118751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 23:11:04.139566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 23:11:04.202019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 23:11:04.212421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 23:11:04.252425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I0919 23:11:06.135259       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [9cfac78cf12303fd4f547a1f839aba05a7b01f153dcae55f2e060f69f98c8e8d] <==
	I0919 23:12:19.500263       1 serving.go:386] Generated self-signed cert in-memory
	W0919 23:12:21.023505       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 23:12:21.023536       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 23:12:21.023548       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 23:12:21.023558       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 23:12:21.057122       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 23:12:21.057189       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:12:21.060208       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:12:21.060284       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:12:21.060670       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 23:12:21.061093       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 23:12:21.160477       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 23:13:53 no-preload-364197 kubelet[3246]: I0919 23:13:53.897329    3246 kubelet_node_status.go:124] "Node was previously registered" node="no-preload-364197"
	Sep 19 23:13:53 no-preload-364197 kubelet[3246]: I0919 23:13:53.897436    3246 kubelet_node_status.go:78] "Successfully registered node" node="no-preload-364197"
	Sep 19 23:13:53 no-preload-364197 kubelet[3246]: I0919 23:13:53.897473    3246 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 23:13:53 no-preload-364197 kubelet[3246]: I0919 23:13:53.898219    3246 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.653255    3246 apiserver.go:52] "Watching apiserver"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.665557    3246 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.678464    3246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cc8840e-ebe9-4d2e-8e31-c00341a52c4a-lib-modules\") pod \"kube-proxy-t4j4z\" (UID: \"6cc8840e-ebe9-4d2e-8e31-c00341a52c4a\") " pod="kube-system/kube-proxy-t4j4z"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.678551    3246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/912ae03a-d62e-4d7d-8184-e16295d5ab7d-lib-modules\") pod \"kindnet-89psw\" (UID: \"912ae03a-d62e-4d7d-8184-e16295d5ab7d\") " pod="kube-system/kindnet-89psw"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.678906    3246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/912ae03a-d62e-4d7d-8184-e16295d5ab7d-cni-cfg\") pod \"kindnet-89psw\" (UID: \"912ae03a-d62e-4d7d-8184-e16295d5ab7d\") " pod="kube-system/kindnet-89psw"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.679390    3246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e3539cd4-a4d0-4bd2-a0c5-34c7cc316493-tmp\") pod \"storage-provisioner\" (UID: \"e3539cd4-a4d0-4bd2-a0c5-34c7cc316493\") " pod="kube-system/storage-provisioner"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.679447    3246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/912ae03a-d62e-4d7d-8184-e16295d5ab7d-xtables-lock\") pod \"kindnet-89psw\" (UID: \"912ae03a-d62e-4d7d-8184-e16295d5ab7d\") " pod="kube-system/kindnet-89psw"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.679866    3246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cc8840e-ebe9-4d2e-8e31-c00341a52c4a-xtables-lock\") pod \"kube-proxy-t4j4z\" (UID: \"6cc8840e-ebe9-4d2e-8e31-c00341a52c4a\") " pod="kube-system/kube-proxy-t4j4z"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.748207    3246 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.749177    3246 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.749526    3246 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.750198    3246 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: E0919 23:13:54.762219    3246 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-no-preload-364197\" already exists" pod="kube-system/kube-controller-manager-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: E0919 23:13:54.769563    3246 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-no-preload-364197\" already exists" pod="kube-system/kube-scheduler-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: E0919 23:13:54.771976    3246 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-no-preload-364197\" already exists" pod="kube-system/kube-apiserver-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: E0919 23:13:54.773581    3246 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-no-preload-364197\" already exists" pod="kube-system/etcd-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.959679    3246 scope.go:117] "RemoveContainer" containerID="b9a3ddd53c0982f2b6456f555d5b0736bed2fe411350d742099824f5f0e34944"
	Sep 19 23:13:55 no-preload-364197 kubelet[3246]: E0919 23:13:55.019349    3246 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 19 23:13:55 no-preload-364197 kubelet[3246]: E0919 23:13:55.019554    3246 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 19 23:13:55 no-preload-364197 kubelet[3246]: E0919 23:13:55.019800    3246 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-54wcq_kube-system(8b7f16ad-5a72-473e-90dc-6ad786e6e753): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" logger="UnhandledError"
	Sep 19 23:13:55 no-preload-364197 kubelet[3246]: E0919 23:13:55.020000    3246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-54wcq" podUID="8b7f16ad-5a72-473e-90dc-6ad786e6e753"
	
	
	==> kubernetes-dashboard [18e79812d03fa9def5117aa7a3a884d976dfdc94c077b748a590d0c5307a33b4] <==
	2025/09/19 23:13:47 Using namespace: kubernetes-dashboard
	2025/09/19 23:13:47 Using in-cluster config to connect to apiserver
	2025/09/19 23:13:47 Using secret token for csrf signing
	2025/09/19 23:13:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:13:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/19 23:13:47 Successful initial request to the apiserver, version: v1.34.0
	2025/09/19 23:13:47 Generating JWE encryption key
	2025/09/19 23:13:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/19 23:13:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/19 23:13:47 Initializing JWE encryption key from synchronized object
	2025/09/19 23:13:47 Creating in-cluster Sidecar client
	2025/09/19 23:13:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:13:47 Serving insecurely on HTTP port: 9090
	2025/09/19 23:13:47 Starting overwatch
	
	
	==> kubernetes-dashboard [265b08adaae7361b442c8dbb2cece3f4be85d7eeb4a1035bce7e3fc80dfe2381] <==
	2025/09/19 23:13:00 Using namespace: kubernetes-dashboard
	2025/09/19 23:13:00 Using in-cluster config to connect to apiserver
	2025/09/19 23:13:00 Using secret token for csrf signing
	2025/09/19 23:13:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:13:00 Starting overwatch
	panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: i/o timeout
	
	goroutine 1 [running]:
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00071fae8)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x30e
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0003a6100)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:527 +0x94
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0x19aba3a?)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:495 +0x32
	github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:594
	main.main()
		/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:96 +0x1cf
	
	
	==> storage-provisioner [84fd26f46a650832d3eb69def25786df1f79a73ba8e2bd5c0865f96ca1de4b47] <==
	W0919 23:13:29.783024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:31.786366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:31.824817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:33.828722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:33.833423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:35.837449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:35.842313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:37.846113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:37.851511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:39.854626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:39.858791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:41.862516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:41.867222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:43.870358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:43.874583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:45.879778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:45.884429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:47.888303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:47.892616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:49.895851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:49.901215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:51.904950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:51.912664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:53.916617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:53.921735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b99df30ad600675c9bcc7e13b3281021bfd6a2b7e8368cf5d4c7ec80ee03974a] <==
	I0919 23:12:21.859510       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 23:12:51.862764       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-364197 -n no-preload-364197
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-364197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-54wcq
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-364197 describe pod metrics-server-746fcd58dc-54wcq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-364197 describe pod metrics-server-746fcd58dc-54wcq: exit status 1 (97.50601ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-54wcq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-364197 describe pod metrics-server-746fcd58dc-54wcq: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-364197
helpers_test.go:243: (dbg) docker inspect no-preload-364197:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "29ec8599971bca540130714385d35c1912e3f64ae96dadf6085a0ebd160bd352",
	        "Created": "2025-09-19T23:10:34.49485581Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286872,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T23:12:11.949430547Z",
	            "FinishedAt": "2025-09-19T23:12:10.984909383Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/29ec8599971bca540130714385d35c1912e3f64ae96dadf6085a0ebd160bd352/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/29ec8599971bca540130714385d35c1912e3f64ae96dadf6085a0ebd160bd352/hostname",
	        "HostsPath": "/var/lib/docker/containers/29ec8599971bca540130714385d35c1912e3f64ae96dadf6085a0ebd160bd352/hosts",
	        "LogPath": "/var/lib/docker/containers/29ec8599971bca540130714385d35c1912e3f64ae96dadf6085a0ebd160bd352/29ec8599971bca540130714385d35c1912e3f64ae96dadf6085a0ebd160bd352-json.log",
	        "Name": "/no-preload-364197",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-364197:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-364197",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "29ec8599971bca540130714385d35c1912e3f64ae96dadf6085a0ebd160bd352",
	                "LowerDir": "/var/lib/docker/overlay2/b3feb910e66685d1f0777cc789221b1f5d4f7f0332bc96a2a55a77144d4aa72a-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3feb910e66685d1f0777cc789221b1f5d4f7f0332bc96a2a55a77144d4aa72a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3feb910e66685d1f0777cc789221b1f5d4f7f0332bc96a2a55a77144d4aa72a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3feb910e66685d1f0777cc789221b1f5d4f7f0332bc96a2a55a77144d4aa72a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-364197",
	                "Source": "/var/lib/docker/volumes/no-preload-364197/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-364197",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-364197",
	                "name.minikube.sigs.k8s.io": "no-preload-364197",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "20ec0619219617a12e62eabacfcb49e9df8e2245240fe7e04185e99ea01a00ae",
	            "SandboxKey": "/var/run/docker/netns/20ec06192196",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-364197": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:c5:56:e8:e1:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "76962f0867a93383c165f73f4cfd146d75602db376a54d44233a14e1bb615aac",
	                    "EndpointID": "1ebc872fb88cb4c12fb74342fc521e14de6cc9bd0e41de3ce64ef033532d9820",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-364197",
	                        "29ec8599971b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
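The dump above is the raw `docker container inspect` output for the no-preload-364197 node container: the HostConfig (privileged, tmpfs on /run and /tmp, 3 GiB memory limit), the /var volume and /lib/modules bind mount, and the NetworkSettings.Ports block mapping the container's 22/tcp and 8443/tcp to loopback host ports. As a minimal sketch of reading that same data programmatically — assuming the Docker Engine Go SDK (github.com/docker/docker/client) and the container name from this run, neither of which appears in the test code itself:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/docker/docker/client"
    )

    func main() {
    	ctx := context.Background()

    	// Talk to the local daemon; FromEnv honours DOCKER_HOST and friends.
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cli.Close()

    	// Same payload as `docker container inspect no-preload-364197` shown above.
    	info, err := cli.ContainerInspect(ctx, "no-preload-364197")
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Walk NetworkSettings.Ports, e.g. "8443/tcp" bound to 127.0.0.1:33087.
    	for port, bindings := range info.NetworkSettings.Ports {
    		for _, b := range bindings {
    			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
    		}
    	}
    }

Run against the daemon from this job, such a sketch would print lines like "8443/tcp -> 127.0.0.1:33087", matching the Ports block in the inspect output above.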
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-364197 -n no-preload-364197
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-364197 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-364197 logs -n 25: (2.08569713s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable metrics-server -p no-preload-364197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:11 UTC │ 19 Sep 25 23:11 UTC │
	│ stop    │ -p no-preload-364197 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:11 UTC │ 19 Sep 25 23:12 UTC │
	│ addons  │ enable dashboard -p no-preload-364197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ start   │ -p no-preload-364197 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-403962 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ stop    │ -p embed-certs-403962 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ image   │ old-k8s-version-757990 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ pause   │ -p old-k8s-version-757990 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ unpause │ -p old-k8s-version-757990 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ delete  │ -p old-k8s-version-757990                                                                                                                                                                                                                           │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ delete  │ -p old-k8s-version-757990                                                                                                                                                                                                                           │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ delete  │ -p disable-driver-mounts-606373                                                                                                                                                                                                                     │ disable-driver-mounts-606373 │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ start   │ -p default-k8s-diff-port-149888 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-149888 │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-403962 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ start   │ -p embed-certs-403962 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:13 UTC │
	│ start   │ -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-430859    │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-430859    │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ delete  │ -p kubernetes-upgrade-430859                                                                                                                                                                                                                        │ kubernetes-upgrade-430859    │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ start   │ -p newest-cni-312465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-312465            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │                     │
	│ image   │ no-preload-364197 image list --format=json                                                                                                                                                                                                          │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ pause   │ -p no-preload-364197 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ unpause │ -p no-preload-364197 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ image   │ embed-certs-403962 image list --format=json                                                                                                                                                                                                         │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ pause   │ -p embed-certs-403962 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ unpause │ -p embed-certs-403962 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:13:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:13:27.238593  304826 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:13:27.238920  304826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:13:27.238933  304826 out.go:374] Setting ErrFile to fd 2...
	I0919 23:13:27.238939  304826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:13:27.239301  304826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 23:13:27.240254  304826 out.go:368] Setting JSON to false
	I0919 23:13:27.242293  304826 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6951,"bootTime":1758316656,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:13:27.242391  304826 start.go:140] virtualization: kvm guest
	I0919 23:13:27.245079  304826 out.go:179] * [newest-cni-312465] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:13:27.247014  304826 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:13:27.247038  304826 notify.go:220] Checking for updates...
	I0919 23:13:27.250017  304826 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:13:27.251473  304826 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:13:27.253044  304826 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 23:13:27.254720  304826 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:13:27.256145  304826 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:13:27.258280  304826 config.go:182] Loaded profile config "default-k8s-diff-port-149888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:27.258431  304826 config.go:182] Loaded profile config "embed-certs-403962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:27.258597  304826 config.go:182] Loaded profile config "no-preload-364197": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:27.258738  304826 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:13:27.288883  304826 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:13:27.288975  304826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:13:27.365354  304826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 23:13:27.353196914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:13:27.365506  304826 docker.go:318] overlay module found
	I0919 23:13:27.367763  304826 out.go:179] * Using the docker driver based on user configuration
	I0919 23:13:27.369311  304826 start.go:304] selected driver: docker
	I0919 23:13:27.369334  304826 start.go:918] validating driver "docker" against <nil>
	I0919 23:13:27.369348  304826 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:13:27.370111  304826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:13:27.453927  304826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 23:13:27.442609844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:13:27.454140  304826 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W0919 23:13:27.454193  304826 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0919 23:13:27.454507  304826 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0919 23:13:27.457066  304826 out.go:179] * Using Docker driver with root privileges
	I0919 23:13:27.458665  304826 cni.go:84] Creating CNI manager for ""
	I0919 23:13:27.458745  304826 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:13:27.458755  304826 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 23:13:27.458835  304826 start.go:348] cluster config:
	{Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:13:27.460214  304826 out.go:179] * Starting "newest-cni-312465" primary control-plane node in "newest-cni-312465" cluster
	I0919 23:13:27.461705  304826 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 23:13:27.463479  304826 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:13:27.464969  304826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:13:27.465036  304826 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 23:13:27.465066  304826 cache.go:58] Caching tarball of preloaded images
	I0919 23:13:27.465145  304826 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:13:27.465211  304826 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:13:27.465224  304826 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 23:13:27.465373  304826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json ...
	I0919 23:13:27.465402  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json: {Name:mkbe0b2096af0dfcb672d8d5ff02d95192e51311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:27.491881  304826 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:13:27.491906  304826 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:13:27.491929  304826 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:13:27.491965  304826 start.go:360] acquireMachinesLock for newest-cni-312465: {Name:mkdaed0f91b48ccb0806887f4c48e7b6207e9286 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:13:27.492089  304826 start.go:364] duration metric: took 98.144µs to acquireMachinesLock for "newest-cni-312465"
	I0919 23:13:27.492120  304826 start.go:93] Provisioning new machine with config: &{Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMet
rics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:13:27.492213  304826 start.go:125] createHost starting for "" (driver="docker")
	I0919 23:13:25.986611  294587 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 22.501936199s
	I0919 23:13:25.991147  294587 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:13:25.991278  294587 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I0919 23:13:25.991386  294587 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:13:25.991522  294587 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W0919 23:13:25.316055  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:27.322716  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:27.416884  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	W0919 23:13:29.942623  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	I0919 23:13:27.494730  304826 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:13:27.494955  304826 start.go:159] libmachine.API.Create for "newest-cni-312465" (driver="docker")
	I0919 23:13:27.494995  304826 client.go:168] LocalClient.Create starting
	I0919 23:13:27.495095  304826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 23:13:27.495131  304826 main.go:141] libmachine: Decoding PEM data...
	I0919 23:13:27.495171  304826 main.go:141] libmachine: Parsing certificate...
	I0919 23:13:27.495239  304826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 23:13:27.495270  304826 main.go:141] libmachine: Decoding PEM data...
	I0919 23:13:27.495286  304826 main.go:141] libmachine: Parsing certificate...
	I0919 23:13:27.495751  304826 cli_runner.go:164] Run: docker network inspect newest-cni-312465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:13:27.519239  304826 cli_runner.go:211] docker network inspect newest-cni-312465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:13:27.519336  304826 network_create.go:284] running [docker network inspect newest-cni-312465] to gather additional debugging logs...
	I0919 23:13:27.519357  304826 cli_runner.go:164] Run: docker network inspect newest-cni-312465
	W0919 23:13:27.542030  304826 cli_runner.go:211] docker network inspect newest-cni-312465 returned with exit code 1
	I0919 23:13:27.542062  304826 network_create.go:287] error running [docker network inspect newest-cni-312465]: docker network inspect newest-cni-312465: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-312465 not found
	I0919 23:13:27.542075  304826 network_create.go:289] output of [docker network inspect newest-cni-312465]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-312465 not found
	
	** /stderr **
	I0919 23:13:27.542219  304826 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:13:27.573077  304826 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-465af21e2d8d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:b5:e0:20:10:48} reservation:<nil>}
	I0919 23:13:27.574029  304826 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd9dd989b948 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:ff:32:51:a1:28} reservation:<nil>}
	I0919 23:13:27.575058  304826 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d5cd7c460be9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:d8:0f:d3:49:b9} reservation:<nil>}
	I0919 23:13:27.576219  304826 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-eeb244b5b4d9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:19:45:7a:f8:43} reservation:<nil>}
	I0919 23:13:27.577101  304826 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-76962f0867a9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:d8:43:3c:3c:e2} reservation:<nil>}
	I0919 23:13:27.578259  304826 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cf1dc0}
	I0919 23:13:27.578290  304826 network_create.go:124] attempt to create docker network newest-cni-312465 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0919 23:13:27.578338  304826 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-312465 newest-cni-312465
	I0919 23:13:27.664074  304826 network_create.go:108] docker network newest-cni-312465 192.168.94.0/24 created
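The run of "skipping subnet ... that is taken" lines above, ending in "using free private subnet 192.168.94.0/24", shows the selection walk: candidate private /24s starting at 192.168.49.0, with the third octet stepped by 9, and the first subnet not already claimed by an existing bridge network wins. A rough, purely illustrative sketch of that walk (hard-coding the taken subnets from this run rather than querying Docker, and not minikube's actual implementation):

    package main

    import "fmt"

    // firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... (third octet += 9)
    // and returns the first candidate not already in use, mirroring the
    // "skipping subnet ... that is taken" lines in the log above.
    func firstFreeSubnet(taken map[string]bool) string {
    	for octet := 49; octet <= 255; octet += 9 {
    		candidate := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[candidate] {
    			return candidate
    		}
    	}
    	return ""
    }

    func main() {
    	// Subnets already claimed by the other profiles in this run.
    	taken := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    		"192.168.76.0/24": true,
    		"192.168.85.0/24": true,
    	}
    	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.94.0/24
    }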
	I0919 23:13:27.664108  304826 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-312465" container
	I0919 23:13:27.664204  304826 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:13:27.686848  304826 cli_runner.go:164] Run: docker volume create newest-cni-312465 --label name.minikube.sigs.k8s.io=newest-cni-312465 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:13:27.711517  304826 oci.go:103] Successfully created a docker volume newest-cni-312465
	I0919 23:13:27.711624  304826 cli_runner.go:164] Run: docker run --rm --name newest-cni-312465-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-312465 --entrypoint /usr/bin/test -v newest-cni-312465:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:13:28.191316  304826 oci.go:107] Successfully prepared a docker volume newest-cni-312465
	I0919 23:13:28.191366  304826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:13:28.191389  304826 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:13:28.191481  304826 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-312465:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 23:13:32.076573  304826 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-312465:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.885033462s)
	I0919 23:13:32.076612  304826 kic.go:203] duration metric: took 3.885218568s to extract preloaded images to volume ...
	W0919 23:13:32.076710  304826 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:13:32.076743  304826 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:13:32.076794  304826 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:13:32.149761  304826 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-312465 --name newest-cni-312465 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-312465 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-312465 --network newest-cni-312465 --ip 192.168.94.2 --volume newest-cni-312465:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:13:28.139399  294587 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.148131492s
	I0919 23:13:28.449976  294587 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.458741458s
	I0919 23:13:32.493086  294587 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.501778199s
	I0919 23:13:32.510785  294587 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:13:32.524242  294587 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:13:32.539521  294587 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:13:32.539729  294587 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-149888 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:13:32.551224  294587 kubeadm.go:310] [bootstrap-token] Using token: n81jvw.nat4ajoeag176u3n
	I0919 23:13:32.553385  294587 out.go:252]   - Configuring RBAC rules ...
	I0919 23:13:32.553522  294587 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:13:32.557811  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:13:32.567024  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:13:32.570531  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:13:32.576653  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:13:32.580237  294587 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:13:32.901145  294587 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:13:33.324739  294587 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:13:33.900632  294587 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:13:33.901573  294587 kubeadm.go:310] 
	I0919 23:13:33.901667  294587 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:13:33.901677  294587 kubeadm.go:310] 
	I0919 23:13:33.901751  294587 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:13:33.901758  294587 kubeadm.go:310] 
	I0919 23:13:33.901777  294587 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:13:33.901831  294587 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:13:33.901895  294587 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:13:33.901902  294587 kubeadm.go:310] 
	I0919 23:13:33.901944  294587 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:13:33.901974  294587 kubeadm.go:310] 
	I0919 23:13:33.902054  294587 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:13:33.902064  294587 kubeadm.go:310] 
	I0919 23:13:33.902143  294587 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:13:33.902266  294587 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:13:33.902331  294587 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:13:33.902339  294587 kubeadm.go:310] 
	I0919 23:13:33.902406  294587 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:13:33.902479  294587 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:13:33.902485  294587 kubeadm.go:310] 
	I0919 23:13:33.902551  294587 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token n81jvw.nat4ajoeag176u3n \
	I0919 23:13:33.902635  294587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 \
	I0919 23:13:33.902655  294587 kubeadm.go:310] 	--control-plane 
	I0919 23:13:33.902661  294587 kubeadm.go:310] 
	I0919 23:13:33.902730  294587 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:13:33.902737  294587 kubeadm.go:310] 
	I0919 23:13:33.902801  294587 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token n81jvw.nat4ajoeag176u3n \
	I0919 23:13:33.902883  294587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 
	I0919 23:13:33.906239  294587 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:13:33.906372  294587 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:13:33.906402  294587 cni.go:84] Creating CNI manager for ""
	I0919 23:13:33.906416  294587 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:13:33.908216  294587 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W0919 23:13:29.819116  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:31.826948  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:34.316941  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	I0919 23:13:32.476430  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Running}}
	I0919 23:13:32.500104  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:13:32.523104  304826 cli_runner.go:164] Run: docker exec newest-cni-312465 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:13:32.578263  304826 oci.go:144] the created container "newest-cni-312465" has a running status.
	I0919 23:13:32.578295  304826 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa...
	I0919 23:13:32.976039  304826 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:13:33.009077  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:13:33.031547  304826 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:13:33.031565  304826 kic_runner.go:114] Args: [docker exec --privileged newest-cni-312465 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:13:33.092603  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:13:33.115283  304826 machine.go:93] provisionDockerMachine start ...
	I0919 23:13:33.115380  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:33.139784  304826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:13:33.140058  304826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I0919 23:13:33.140073  304826 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:13:33.290427  304826 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-312465
	
	I0919 23:13:33.290458  304826 ubuntu.go:182] provisioning hostname "newest-cni-312465"
	I0919 23:13:33.290507  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:33.316275  304826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:13:33.316511  304826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I0919 23:13:33.316526  304826 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-312465 && echo "newest-cni-312465" | sudo tee /etc/hostname
	I0919 23:13:33.472768  304826 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-312465
	
	I0919 23:13:33.472864  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:33.494111  304826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:13:33.494398  304826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I0919 23:13:33.494430  304826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-312465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-312465/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-312465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:13:33.635421  304826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:13:33.635451  304826 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 23:13:33.635494  304826 ubuntu.go:190] setting up certificates
	I0919 23:13:33.635517  304826 provision.go:84] configureAuth start
	I0919 23:13:33.635574  304826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:13:33.655878  304826 provision.go:143] copyHostCerts
	I0919 23:13:33.655961  304826 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 23:13:33.655977  304826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 23:13:33.656058  304826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 23:13:33.656241  304826 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 23:13:33.656255  304826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 23:13:33.656304  304826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 23:13:33.656405  304826 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 23:13:33.656415  304826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 23:13:33.656457  304826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 23:13:33.656554  304826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.newest-cni-312465 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-312465]
	I0919 23:13:34.255292  304826 provision.go:177] copyRemoteCerts
	I0919 23:13:34.255368  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:13:34.255413  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.284316  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.387988  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 23:13:34.419504  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0919 23:13:34.448496  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:13:34.475661  304826 provision.go:87] duration metric: took 840.126723ms to configureAuth
	I0919 23:13:34.475694  304826 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:13:34.475872  304826 config.go:182] Loaded profile config "newest-cni-312465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:34.475881  304826 machine.go:96] duration metric: took 1.360576611s to provisionDockerMachine
	I0919 23:13:34.475891  304826 client.go:171] duration metric: took 6.980885128s to LocalClient.Create
	I0919 23:13:34.475913  304826 start.go:167] duration metric: took 6.980958258s to libmachine.API.Create "newest-cni-312465"
	I0919 23:13:34.475926  304826 start.go:293] postStartSetup for "newest-cni-312465" (driver="docker")
	I0919 23:13:34.475937  304826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:13:34.475995  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:13:34.476029  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.496668  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.598095  304826 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:13:34.602045  304826 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:13:34.602091  304826 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:13:34.602104  304826 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:13:34.602111  304826 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:13:34.602121  304826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 23:13:34.602190  304826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 23:13:34.602281  304826 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 23:13:34.602369  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:13:34.612660  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:13:34.643262  304826 start.go:296] duration metric: took 167.32169ms for postStartSetup
	I0919 23:13:34.643684  304826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:13:34.663272  304826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json ...
	I0919 23:13:34.663583  304826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:13:34.663633  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.683961  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.779205  304826 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:13:34.785070  304826 start.go:128] duration metric: took 7.292838847s to createHost
	I0919 23:13:34.785099  304826 start.go:83] releasing machines lock for "newest-cni-312465", held for 7.292995602s
	I0919 23:13:34.785189  304826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:13:34.807464  304826 ssh_runner.go:195] Run: cat /version.json
	I0919 23:13:34.807503  304826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:13:34.807575  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.807583  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.829219  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.829637  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:35.008352  304826 ssh_runner.go:195] Run: systemctl --version
	I0919 23:13:35.013908  304826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:13:35.019269  304826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:13:35.055596  304826 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:13:35.055680  304826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:13:35.090798  304826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 23:13:35.090825  304826 start.go:495] detecting cgroup driver to use...
	I0919 23:13:35.090862  304826 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:13:35.090925  304826 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 23:13:35.106670  304826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:13:35.120167  304826 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:13:35.120229  304826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:13:35.136229  304826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:13:35.152080  304826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:13:35.229432  304826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:13:35.314675  304826 docker.go:234] disabling docker service ...
	I0919 23:13:35.314746  304826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:13:35.336969  304826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:13:35.352061  304826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:13:35.433841  304826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:13:35.511892  304826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:13:35.525179  304826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:13:35.544848  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:13:35.558556  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:13:35.570787  304826 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:13:35.570874  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:13:35.583714  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:13:35.596563  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:13:35.608811  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:13:35.621274  304826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:13:35.632671  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:13:35.646560  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:13:35.659112  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:13:35.671491  304826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:13:35.681987  304826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:13:35.693319  304826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:13:35.765943  304826 ssh_runner.go:195] Run: sudo systemctl restart containerd
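The sed edits above switch containerd to SystemdCgroup = true and the runc v2 shim before this restart; an illustrative check (not taken from the log) to confirm the change on the node, assuming the default /etc/containerd/config.toml path used here:
    # show the setting written by the sed edit
    grep -n 'SystemdCgroup' /etc/containerd/config.toml
    # dump the configuration containerd actually loaded after the restart
    containerd config dump | grep -i systemdcgroup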
	I0919 23:13:35.900474  304826 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 23:13:35.900553  304826 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 23:13:35.904775  304826 start.go:563] Will wait 60s for crictl version
	I0919 23:13:35.904838  304826 ssh_runner.go:195] Run: which crictl
	I0919 23:13:35.908969  304826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:13:35.948499  304826 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 23:13:35.948718  304826 ssh_runner.go:195] Run: containerd --version
	I0919 23:13:35.976417  304826 ssh_runner.go:195] Run: containerd --version
	I0919 23:13:36.005950  304826 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 23:13:36.007659  304826 cli_runner.go:164] Run: docker network inspect newest-cni-312465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:13:36.028772  304826 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0919 23:13:36.033878  304826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:13:36.053802  304826 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W0919 23:13:31.971038  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	W0919 23:13:34.412827  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	I0919 23:13:36.412824  286555 pod_ready.go:94] pod "coredns-66bc5c9577-xg99k" is "Ready"
	I0919 23:13:36.412859  286555 pod_ready.go:86] duration metric: took 1m14.00590752s for pod "coredns-66bc5c9577-xg99k" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.415705  286555 pod_ready.go:83] waiting for pod "etcd-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.420550  286555 pod_ready.go:94] pod "etcd-no-preload-364197" is "Ready"
	I0919 23:13:36.420580  286555 pod_ready.go:86] duration metric: took 4.848977ms for pod "etcd-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.423284  286555 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.428673  286555 pod_ready.go:94] pod "kube-apiserver-no-preload-364197" is "Ready"
	I0919 23:13:36.428703  286555 pod_ready.go:86] duration metric: took 5.394829ms for pod "kube-apiserver-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.431305  286555 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.610936  286555 pod_ready.go:94] pod "kube-controller-manager-no-preload-364197" is "Ready"
	I0919 23:13:36.610963  286555 pod_ready.go:86] duration metric: took 179.625984ms for pod "kube-controller-manager-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.056701  304826 kubeadm.go:875] updating cluster {Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:13:36.056877  304826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:13:36.057030  304826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:13:36.099591  304826 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:13:36.099615  304826 containerd.go:534] Images already preloaded, skipping extraction
	I0919 23:13:36.099675  304826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:13:36.143373  304826 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:13:36.143413  304826 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:13:36.143421  304826 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 containerd true true} ...
	I0919 23:13:36.143508  304826 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-312465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
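The kubelet unit override shown above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down; purely as an illustrative aside, standard systemd commands can show what was merged on the node:
    # print the kubelet unit plus all drop-ins, including 10-kubeadm.conf
    systemctl cat kubelet
    # confirm the overridden ExecStart (with --node-ip and --hostname-override) took effect
    systemctl show kubelet -p ExecStart --no-pager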
	I0919 23:13:36.143562  304826 ssh_runner.go:195] Run: sudo crictl info
	I0919 23:13:36.185797  304826 cni.go:84] Creating CNI manager for ""
	I0919 23:13:36.185828  304826 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:13:36.185843  304826 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0919 23:13:36.185875  304826 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-312465 NodeName:newest-cni-312465 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:13:36.186182  304826 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-312465"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:13:36.186269  304826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:13:36.198096  304826 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:13:36.198546  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:13:36.214736  304826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0919 23:13:36.244125  304826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:13:36.270995  304826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
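As a hedged aside (not executed in this run): once the generated config above lands on the node as /var/tmp/minikube/kubeadm.yaml.new, a file of this shape can be validated without mutating cluster state:
    # parse the config and print what kubeadm would do, without applying anything
    sudo kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new
    # print the upstream defaults for comparison
    kubeadm config print init-defaults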
	I0919 23:13:36.295177  304826 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:13:36.299365  304826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:13:36.313119  304826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:13:36.396378  304826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:13:36.418497  304826 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465 for IP: 192.168.94.2
	I0919 23:13:36.418522  304826 certs.go:194] generating shared ca certs ...
	I0919 23:13:36.418544  304826 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.418705  304826 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 23:13:36.418761  304826 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 23:13:36.418775  304826 certs.go:256] generating profile certs ...
	I0919 23:13:36.418843  304826 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.key
	I0919 23:13:36.418860  304826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.crt with IP's: []
	I0919 23:13:36.531217  304826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.crt ...
	I0919 23:13:36.531247  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.crt: {Name:mk2dead7c7dd4abba877b10a34bd54e0741b0c51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.531436  304826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.key ...
	I0919 23:13:36.531449  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.key: {Name:mkb2dce7d200188d9475ab5211c83bb5dd871bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.531531  304826 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb
	I0919 23:13:36.531547  304826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0919 23:13:36.764681  304826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb ...
	I0919 23:13:36.764719  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb: {Name:mkd78eb5b6eba4ac120b530170a9a115208fec96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.764949  304826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb ...
	I0919 23:13:36.764969  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb: {Name:mk23f979dad453c3780b4813b8fc576ea9e94f4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.765077  304826 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt
	I0919 23:13:36.765208  304826 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key
	I0919 23:13:36.765299  304826 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key
	I0919 23:13:36.765323  304826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt with IP's: []
	I0919 23:13:36.811680  286555 pod_ready.go:83] waiting for pod "kube-proxy-t4j4z" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.211272  286555 pod_ready.go:94] pod "kube-proxy-t4j4z" is "Ready"
	I0919 23:13:37.211303  286555 pod_ready.go:86] duration metric: took 399.591313ms for pod "kube-proxy-t4j4z" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.410092  286555 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.810858  286555 pod_ready.go:94] pod "kube-scheduler-no-preload-364197" is "Ready"
	I0919 23:13:37.810890  286555 pod_ready.go:86] duration metric: took 400.769138ms for pod "kube-scheduler-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.810907  286555 pod_ready.go:40] duration metric: took 1m15.409243632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:13:37.871652  286555 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:13:37.873712  286555 out.go:179] * Done! kubectl is now configured to use "no-preload-364197" cluster and "default" namespace by default
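minikube switches the current kubeconfig context as part of the "Done!" message above; for illustration only, the equivalent kubectl commands for inspecting and re-selecting that context:
    # list contexts; the current one is starred
    kubectl config get-contexts
    # switch back to this cluster if another profile later changes the current context
    kubectl config use-context no-preload-364197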
	I0919 23:13:33.909671  294587 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 23:13:33.914917  294587 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 23:13:33.914945  294587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 23:13:33.936898  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 23:13:34.176650  294587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:13:34.176752  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:34.176780  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-149888 minikube.k8s.io/updated_at=2025_09_19T23_13_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=default-k8s-diff-port-149888 minikube.k8s.io/primary=true
	I0919 23:13:34.185919  294587 ops.go:34] apiserver oom_adj: -16
	I0919 23:13:34.285582  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:34.786386  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:35.286435  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:35.786591  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:36.286349  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:36.786365  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:37.286088  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:37.786249  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:38.286182  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:38.381035  294587 kubeadm.go:1105] duration metric: took 4.204361703s to wait for elevateKubeSystemPrivileges
	I0919 23:13:38.381076  294587 kubeadm.go:394] duration metric: took 40.106256802s to StartCluster
	I0919 23:13:38.381101  294587 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:38.381208  294587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:13:38.383043  294587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:38.383384  294587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:13:38.383418  294587 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:13:38.383497  294587 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:13:38.383584  294587 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-149888"
	I0919 23:13:38.383599  294587 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-149888"
	I0919 23:13:38.383622  294587 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-149888"
	I0919 23:13:38.383623  294587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-149888"
	I0919 23:13:38.383638  294587 config.go:182] Loaded profile config "default-k8s-diff-port-149888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:38.383654  294587 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:13:38.384100  294587 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:13:38.384352  294587 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:13:38.386876  294587 out.go:179] * Verifying Kubernetes components...
	I0919 23:13:38.392366  294587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:13:38.414274  294587 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:13:37.730859  304826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt ...
	I0919 23:13:37.730889  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt: {Name:mka643fd8f3814e682ac62f488ac921be438271e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:37.731102  304826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key ...
	I0919 23:13:37.731122  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key: {Name:mk1e0a6b750f125c5af55b66a1efb72f4537d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:37.731375  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:13:37.731416  304826 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:13:37.731424  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:13:37.731453  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:13:37.731475  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:13:37.731496  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:13:37.731531  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:13:37.732086  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:13:37.760205  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:13:37.788964  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:13:37.821273  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:13:37.854511  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 23:13:37.886302  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:13:37.919585  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:13:37.949973  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:13:37.982330  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:13:38.018976  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:13:38.049608  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:13:38.081886  304826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:13:38.109125  304826 ssh_runner.go:195] Run: openssl version
	I0919 23:13:38.118278  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:13:38.133041  304826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:13:38.138504  304826 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:13:38.138570  304826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:13:38.147725  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 23:13:38.160519  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:13:38.174178  304826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:13:38.179241  304826 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:13:38.179303  304826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:13:38.188486  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:13:38.203742  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:13:38.216299  304826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:13:38.221016  304826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:13:38.221087  304826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:13:38.229132  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
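The ln -fs calls above create OpenSSL-style subject-hash links (e.g. b5213941.0) so TLS clients can locate the CA in /etc/ssl/certs; an illustrative sketch of how such a link name is derived, using the file names from the log:
    # the link name is the certificate's subject hash plus a ".0" suffix
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"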
	I0919 23:13:38.242362  304826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:13:38.247181  304826 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:13:38.247247  304826 kubeadm.go:392] StartCluster: {Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:13:38.247335  304826 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:13:38.247392  304826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:13:38.289664  304826 cri.go:89] found id: ""
	I0919 23:13:38.289745  304826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:13:38.300688  304826 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:13:38.314602  304826 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:13:38.314666  304826 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:13:38.328513  304826 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:13:38.328532  304826 kubeadm.go:157] found existing configuration files:
	
	I0919 23:13:38.328573  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:13:38.340801  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:13:38.340902  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:13:38.354142  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:13:38.367990  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:13:38.368067  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:13:38.379710  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:13:38.393587  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:13:38.393654  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:13:38.406457  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:13:38.423007  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:13:38.423071  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:13:38.441889  304826 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:13:38.509349  304826 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:13:38.509425  304826 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:13:38.535354  304826 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:13:38.535436  304826 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:13:38.535487  304826 kubeadm.go:310] OS: Linux
	I0919 23:13:38.535547  304826 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:13:38.535585  304826 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:13:38.535633  304826 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:13:38.535689  304826 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:13:38.535753  304826 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:13:38.535813  304826 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:13:38.535850  304826 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:13:38.535885  304826 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:13:38.621848  304826 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:13:38.622065  304826 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:13:38.622186  304826 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:13:38.630978  304826 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:13:38.415345  294587 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:13:38.415366  294587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:13:38.415418  294587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:13:38.415735  294587 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-149888"
	I0919 23:13:38.415780  294587 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:13:38.416297  294587 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:13:38.445969  294587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:13:38.447208  294587 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:13:38.447231  294587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:13:38.447297  294587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:13:38.480457  294587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:13:38.540300  294587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:13:38.557619  294587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:13:38.594341  294587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:13:38.630764  294587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:13:38.799085  294587 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0919 23:13:38.800978  294587 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-149888" to be "Ready" ...
	I0919 23:13:38.812605  294587 node_ready.go:49] node "default-k8s-diff-port-149888" is "Ready"
	I0919 23:13:38.812642  294587 node_ready.go:38] duration metric: took 11.622008ms for node "default-k8s-diff-port-149888" to be "Ready" ...
	I0919 23:13:38.812666  294587 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:13:38.812750  294587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:13:39.036443  294587 api_server.go:72] duration metric: took 652.97537ms to wait for apiserver process to appear ...
	I0919 23:13:39.036471  294587 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:13:39.036490  294587 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0919 23:13:39.043372  294587 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0919 23:13:39.047190  294587 api_server.go:141] control plane version: v1.34.0
	I0919 23:13:39.047226  294587 api_server.go:131] duration metric: took 10.747839ms to wait for apiserver health ...
	I0919 23:13:39.047237  294587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:13:39.049788  294587 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W0919 23:13:36.317685  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:38.318647  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	I0919 23:13:39.819987  295194 pod_ready.go:94] pod "coredns-66bc5c9577-t6v26" is "Ready"
	I0919 23:13:39.820015  295194 pod_ready.go:86] duration metric: took 37.509771492s for pod "coredns-66bc5c9577-t6v26" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.822985  295194 pod_ready.go:83] waiting for pod "etcd-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.827553  295194 pod_ready.go:94] pod "etcd-embed-certs-403962" is "Ready"
	I0919 23:13:39.827574  295194 pod_ready.go:86] duration metric: took 4.567201ms for pod "etcd-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.829949  295194 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.834015  295194 pod_ready.go:94] pod "kube-apiserver-embed-certs-403962" is "Ready"
	I0919 23:13:39.834041  295194 pod_ready.go:86] duration metric: took 4.068136ms for pod "kube-apiserver-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.836103  295194 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.014492  295194 pod_ready.go:94] pod "kube-controller-manager-embed-certs-403962" is "Ready"
	I0919 23:13:40.014519  295194 pod_ready.go:86] duration metric: took 178.389529ms for pod "kube-controller-manager-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.214694  295194 pod_ready.go:83] waiting for pod "kube-proxy-5tf2s" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.614193  295194 pod_ready.go:94] pod "kube-proxy-5tf2s" is "Ready"
	I0919 23:13:40.614222  295194 pod_ready.go:86] duration metric: took 399.49287ms for pod "kube-proxy-5tf2s" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.814999  295194 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:41.214398  295194 pod_ready.go:94] pod "kube-scheduler-embed-certs-403962" is "Ready"
	I0919 23:13:41.214429  295194 pod_ready.go:86] duration metric: took 399.403485ms for pod "kube-scheduler-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:41.214439  295194 pod_ready.go:40] duration metric: took 38.913620351s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:13:41.267599  295194 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:13:41.270700  295194 out.go:179] * Done! kubectl is now configured to use "embed-certs-403962" cluster and "default" namespace by default
	I0919 23:13:38.634403  304826 out.go:252]   - Generating certificates and keys ...
	I0919 23:13:38.634645  304826 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:13:38.634729  304826 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:13:38.733514  304826 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:13:39.062476  304826 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:13:39.133445  304826 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:13:39.439953  304826 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:13:39.872072  304826 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:13:39.872221  304826 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-312465] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0919 23:13:39.972922  304826 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:13:39.973129  304826 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-312465] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0919 23:13:40.957549  304826 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:13:41.144394  304826 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:13:41.426739  304826 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:13:41.426849  304826 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:13:41.554555  304826 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:13:41.608199  304826 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:13:41.645796  304826 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:13:41.778911  304826 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:13:41.900942  304826 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:13:41.901396  304826 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:13:41.905522  304826 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:13:41.907209  304826 out.go:252]   - Booting up control plane ...
	I0919 23:13:41.907335  304826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:13:41.907460  304826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:13:41.907982  304826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:13:41.919781  304826 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:13:41.919920  304826 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:13:41.926298  304826 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:13:41.926476  304826 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:13:41.926547  304826 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:13:42.017500  304826 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:13:42.017660  304826 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:13:39.052217  294587 addons.go:514] duration metric: took 668.711417ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 23:13:39.053005  294587 system_pods.go:59] 9 kube-system pods found
	I0919 23:13:39.053044  294587 system_pods.go:61] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.053057  294587 system_pods.go:61] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.053070  294587 system_pods.go:61] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:13:39.053085  294587 system_pods.go:61] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 23:13:39.053092  294587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.053105  294587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.053113  294587 system_pods.go:61] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.053135  294587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.053144  294587 system_pods.go:61] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.053189  294587 system_pods.go:74] duration metric: took 5.910482ms to wait for pod list to return data ...
	I0919 23:13:39.053205  294587 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:13:39.055828  294587 default_sa.go:45] found service account: "default"
	I0919 23:13:39.055846  294587 default_sa.go:55] duration metric: took 2.635401ms for default service account to be created ...
	I0919 23:13:39.055855  294587 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:13:39.058754  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:39.058787  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.058797  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.058807  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:13:39.058821  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 23:13:39.058830  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.058841  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.058846  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.058852  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.058857  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.058878  294587 retry.go:31] will retry after 270.945985ms: missing components: kube-dns, kube-proxy
	I0919 23:13:39.304737  294587 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-149888" context rescaled to 1 replicas
	I0919 23:13:39.337213  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:39.337253  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.337265  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.337271  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:39.337278  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:39.337284  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.337290  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.337298  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.337305  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.337314  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.337335  294587 retry.go:31] will retry after 357.220825ms: missing components: kube-dns
	I0919 23:13:39.698915  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:39.698949  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.698958  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.698966  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:39.698975  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:39.698980  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.698987  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.698995  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.699002  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.699013  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.699035  294587 retry.go:31] will retry after 375.514546ms: missing components: kube-dns
	I0919 23:13:40.079067  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:40.079105  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:40.079117  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:40.079125  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:40.079131  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:40.079136  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:40.079141  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:40.079148  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:40.079191  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:40.079199  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:40.079216  294587 retry.go:31] will retry after 558.632768ms: missing components: kube-dns
	I0919 23:13:40.643894  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:40.643930  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:40.643938  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:40.643947  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:40.643953  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:40.643960  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:40.643970  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:40.643983  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:40.643989  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:40.644010  294587 retry.go:31] will retry after 761.400913ms: missing components: kube-dns
	I0919 23:13:41.410199  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:41.410236  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:41.410250  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:41.410257  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:41.410263  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:41.410269  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:41.410277  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:41.410285  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:41.410291  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:41.410312  294587 retry.go:31] will retry after 629.477098ms: missing components: kube-dns
	I0919 23:13:42.043664  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:42.043705  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:42.043715  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:42.043724  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:42.043729  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:42.043739  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:42.043747  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:42.043753  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:42.043762  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:42.043778  294587 retry.go:31] will retry after 1.069085397s: missing components: kube-dns
	I0919 23:13:43.117253  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:43.117290  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:43.117297  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:43.117305  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:43.117308  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:43.117312  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:43.117318  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:43.117322  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:43.117326  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:43.117339  294587 retry.go:31] will retry after 1.031094562s: missing components: kube-dns
	I0919 23:13:44.153419  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:44.153454  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:44.153460  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:44.153467  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:44.153472  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:44.153475  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:44.153480  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:44.153484  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:44.153487  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:44.153499  294587 retry.go:31] will retry after 1.715155668s: missing components: kube-dns
	I0919 23:13:45.873736  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:45.873776  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:45.873786  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:45.873794  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:45.873800  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:45.873805  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:45.873820  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:45.873826  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:45.873832  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:45.873863  294587 retry.go:31] will retry after 2.128059142s: missing components: kube-dns
	I0919 23:13:48.006564  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:48.006602  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:48.006610  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:48.006618  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:48.006624  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:48.006630  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:48.006635  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:48.006640  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:48.006647  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:48.006662  294587 retry.go:31] will retry after 1.782367114s: missing components: kube-dns
	I0919 23:13:50.518700  304826 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 8.501106835s
	I0919 23:13:50.522818  304826 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:13:50.522974  304826 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I0919 23:13:50.523114  304826 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:13:50.523256  304826 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:13:49.793148  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:49.793210  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:49.793217  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:49.793223  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:49.793229  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:49.793232  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:49.793243  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:49.793246  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:49.793251  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:49.793265  294587 retry.go:31] will retry after 2.338572613s: missing components: kube-dns
	I0919 23:13:52.140344  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:52.140388  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:52.140397  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:52.140407  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:52.140413  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:52.140419  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:52.140428  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:52.140435  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:52.140442  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:52.140471  294587 retry.go:31] will retry after 3.086457646s: missing components: kube-dns
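
The retry.go lines above re-run the same system-pods check with a growing, jittered delay until the missing component (kube-dns here) appears. A generic sketch of that retry-with-backoff pattern follows; checkComponents is a hypothetical stand-in for the real check, and this is not minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// checkComponents is a hypothetical stand-in for the system-pods check in the
// log: it returns an error naming whatever is still missing.
func checkComponents() error {
	return errors.New("missing components: kube-dns")
}

// retryWithBackoff re-runs check until it succeeds or the deadline passes,
// sleeping a jittered, growing interval between attempts, similar to the
// "will retry after ..." lines above.
func retryWithBackoff(check func() error, deadline time.Duration) error {
	start := time.Now()
	wait := 250 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out: %w", err)
		}
		// Add up to 50% jitter, then grow the base interval for the next round.
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if wait < 4*time.Second {
			wait = wait * 3 / 2
		}
	}
}

func main() {
	_ = retryWithBackoff(checkComponents, 2*time.Second)
}
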
	I0919 23:13:52.884946  304826 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.362051829s
	I0919 23:13:53.462893  304826 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.939923299s
	I0919 23:13:55.526762  304826 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.001364253s
	I0919 23:13:55.539011  304826 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:13:55.554378  304826 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:13:55.568644  304826 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:13:55.568919  304826 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-312465 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:13:55.589739  304826 kubeadm.go:310] [bootstrap-token] Using token: jlnn4o.ezmdj0dkuh5aygdp
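
The control-plane-check lines above probe each component's health endpoint over HTTPS until it answers: the apiserver's /livez on the node address, and the controller-manager and scheduler health ports on localhost. The sketch below reproduces those three probes with plain HTTP clients; the apiserver address is cluster-specific and the components serve self-signed certificates, so verification is skipped. Illustrative only, not kubeadm's implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe issues a single GET against a health endpoint and prints the result.
func probe(name, url string) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Control-plane components serve self-signed certs, so skip
		// verification for this illustrative check.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Printf("%s: unreachable: %v\n", name, err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("%s: HTTP %d\n", name, resp.StatusCode)
}

func main() {
	// Endpoints taken from the control-plane-check lines above; the
	// apiserver IP (192.168.94.2) belongs to this particular cluster.
	probe("kube-apiserver", "https://192.168.94.2:8443/livez")
	probe("kube-controller-manager", "https://127.0.0.1:10257/healthz")
	probe("kube-scheduler", "https://127.0.0.1:10259/livez")
}
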
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	d604f10e2b862       523cad1a4df73       2 seconds ago        Exited              dashboard-metrics-scraper   4                   ae9d710a70595       dashboard-metrics-scraper-6ffb444bf9-vdb6s
	18e79812d03fa       07655ddf2eebe       10 seconds ago       Running             kubernetes-dashboard        2                   e0f9415a2add2       kubernetes-dashboard-855c9754f9-rj7g8
	84fd26f46a650       6e38f40d628db       49 seconds ago       Running             storage-provisioner         2                   a8ae1cab0b9d5       storage-provisioner
	1c4ad1fabb59c       df0860106674d       55 seconds ago       Running             kube-proxy                  3                   17832c7bff5ef       kube-proxy-t4j4z
	265b08adaae73       07655ddf2eebe       57 seconds ago       Exited              kubernetes-dashboard        1                   e0f9415a2add2       kubernetes-dashboard-855c9754f9-rj7g8
	02daa0a76ef2f       df0860106674d       About a minute ago   Exited              kube-proxy                  2                   17832c7bff5ef       kube-proxy-t4j4z
	70e9a93e23676       409467f978b4a       About a minute ago   Running             kindnet-cni                 1                   8c60e0ef68a3d       kindnet-89psw
	e6ce4bf79ede2       56cc512116c8f       About a minute ago   Running             busybox                     1                   b54c425145c6f       busybox
	658e15ecef2da       52546a367cc9e       About a minute ago   Running             coredns                     1                   ea0e1058dc597       coredns-66bc5c9577-xg99k
	b99df30ad6006       6e38f40d628db       About a minute ago   Exited              storage-provisioner         1                   a8ae1cab0b9d5       storage-provisioner
	9cfac78cf1230       46169d968e920       About a minute ago   Running             kube-scheduler              1                   667c2756f1609       kube-scheduler-no-preload-364197
	cefe6d56503ab       a0af72f2ec6d6       About a minute ago   Running             kube-controller-manager     1                   c42fe513623c8       kube-controller-manager-no-preload-364197
	6bc33bc1397be       90550c43ad2bc       About a minute ago   Running             kube-apiserver              1                   1c48d024bf452       kube-apiserver-no-preload-364197
	43f594ae64f30       5f1f5298c888d       About a minute ago   Running             etcd                        1                   3699fbc2f2946       etcd-no-preload-364197
	caa9c3c72ac21       56cc512116c8f       2 minutes ago        Exited              busybox                     0                   6708bd673a9d3       busybox
	81516948c500c       52546a367cc9e       2 minutes ago        Exited              coredns                     0                   ddc11fc450968       coredns-66bc5c9577-xg99k
	05a104972ade2       409467f978b4a       2 minutes ago        Exited              kindnet-cni                 0                   5ff81d48d2530       kindnet-89psw
	4033582ceab6b       46169d968e920       2 minutes ago        Exited              kube-scheduler              0                   7363cbc4e9ca2       kube-scheduler-no-preload-364197
	4f4b7cb19d71d       90550c43ad2bc       2 minutes ago        Exited              kube-apiserver              0                   66d2c4b632c14       kube-apiserver-no-preload-364197
	22f8dfd9e25c5       5f1f5298c888d       2 minutes ago        Exited              etcd                        0                   b9ab3a8d8a6f5       etcd-no-preload-364197
	a725a4bde25d4       a0af72f2ec6d6       2 minutes ago        Exited              kube-controller-manager     0                   decb39ebcb9ea       kube-controller-manager-no-preload-364197
	
	
	==> containerd <==
	Sep 19 23:13:46 no-preload-364197 containerd[472]: time="2025-09-19T23:13:46.235478866Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 23:13:46 no-preload-364197 containerd[472]: time="2025-09-19T23:13:46.281311320Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 19 23:13:46 no-preload-364197 containerd[472]: time="2025-09-19T23:13:46.283027512Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 19 23:13:46 no-preload-364197 containerd[472]: time="2025-09-19T23:13:46.283099128Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 19 23:13:47 no-preload-364197 containerd[472]: time="2025-09-19T23:13:47.233298876Z" level=info msg="CreateContainer within sandbox \"e0f9415a2add2d1cb72ecfb8b8814a682dd2353cfa15818eb3d83ef7c26e3991\" for container &ContainerMetadata{Name:kubernetes-dashboard,Attempt:2,}"
	Sep 19 23:13:47 no-preload-364197 containerd[472]: time="2025-09-19T23:13:47.246747288Z" level=info msg="CreateContainer within sandbox \"e0f9415a2add2d1cb72ecfb8b8814a682dd2353cfa15818eb3d83ef7c26e3991\" for &ContainerMetadata{Name:kubernetes-dashboard,Attempt:2,} returns container id \"18e79812d03fa9def5117aa7a3a884d976dfdc94c077b748a590d0c5307a33b4\""
	Sep 19 23:13:47 no-preload-364197 containerd[472]: time="2025-09-19T23:13:47.247407489Z" level=info msg="StartContainer for \"18e79812d03fa9def5117aa7a3a884d976dfdc94c077b748a590d0c5307a33b4\""
	Sep 19 23:13:47 no-preload-364197 containerd[472]: time="2025-09-19T23:13:47.308656259Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 23:13:47 no-preload-364197 containerd[472]: time="2025-09-19T23:13:47.313650727Z" level=info msg="StartContainer for \"18e79812d03fa9def5117aa7a3a884d976dfdc94c077b748a590d0c5307a33b4\" returns successfully"
	Sep 19 23:13:53 no-preload-364197 containerd[472]: time="2025-09-19T23:13:53.897893604Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Sep 19 23:13:54 no-preload-364197 containerd[472]: time="2025-09-19T23:13:54.964067655Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 23:13:54 no-preload-364197 containerd[472]: time="2025-09-19T23:13:54.968010000Z" level=info msg="CreateContainer within sandbox \"ae9d710a70595a7446738d42b81ab47b7adbeb6ff153ccd27855328e05603a08\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,}"
	Sep 19 23:13:54 no-preload-364197 containerd[472]: time="2025-09-19T23:13:54.990681238Z" level=info msg="CreateContainer within sandbox \"ae9d710a70595a7446738d42b81ab47b7adbeb6ff153ccd27855328e05603a08\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"d604f10e2b862ad4bdf7f3512e5ecb35d331257efc4e8ff148379c1af7087e06\""
	Sep 19 23:13:54 no-preload-364197 containerd[472]: time="2025-09-19T23:13:54.992854818Z" level=info msg="StartContainer for \"d604f10e2b862ad4bdf7f3512e5ecb35d331257efc4e8ff148379c1af7087e06\""
	Sep 19 23:13:55 no-preload-364197 containerd[472]: time="2025-09-19T23:13:55.015826564Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 19 23:13:55 no-preload-364197 containerd[472]: time="2025-09-19T23:13:55.018688946Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 19 23:13:55 no-preload-364197 containerd[472]: time="2025-09-19T23:13:55.018722649Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 19 23:13:55 no-preload-364197 containerd[472]: time="2025-09-19T23:13:55.082261675Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 23:13:55 no-preload-364197 containerd[472]: time="2025-09-19T23:13:55.087629520Z" level=info msg="StartContainer for \"d604f10e2b862ad4bdf7f3512e5ecb35d331257efc4e8ff148379c1af7087e06\" returns successfully"
	Sep 19 23:13:55 no-preload-364197 containerd[472]: time="2025-09-19T23:13:55.107016045Z" level=info msg="received exit event container_id:\"d604f10e2b862ad4bdf7f3512e5ecb35d331257efc4e8ff148379c1af7087e06\"  id:\"d604f10e2b862ad4bdf7f3512e5ecb35d331257efc4e8ff148379c1af7087e06\"  pid:3562  exit_status:1  exited_at:{seconds:1758323635  nanos:106730311}"
	Sep 19 23:13:55 no-preload-364197 containerd[472]: time="2025-09-19T23:13:55.140337880Z" level=info msg="shim disconnected" id=d604f10e2b862ad4bdf7f3512e5ecb35d331257efc4e8ff148379c1af7087e06 namespace=k8s.io
	Sep 19 23:13:55 no-preload-364197 containerd[472]: time="2025-09-19T23:13:55.140377888Z" level=warning msg="cleaning up after shim disconnected" id=d604f10e2b862ad4bdf7f3512e5ecb35d331257efc4e8ff148379c1af7087e06 namespace=k8s.io
	Sep 19 23:13:55 no-preload-364197 containerd[472]: time="2025-09-19T23:13:55.140389386Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 19 23:13:55 no-preload-364197 containerd[472]: time="2025-09-19T23:13:55.758918068Z" level=info msg="RemoveContainer for \"b9a3ddd53c0982f2b6456f555d5b0736bed2fe411350d742099824f5f0e34944\""
	Sep 19 23:13:55 no-preload-364197 containerd[472]: time="2025-09-19T23:13:55.764117812Z" level=info msg="RemoveContainer for \"b9a3ddd53c0982f2b6456f555d5b0736bed2fe411350d742099824f5f0e34944\" returns successfully"
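
The PullImage failures above never reach a registry: the node resolver (192.168.85.1:53 here) has no record for "fake.domain", so the HEAD request fails at DNS resolution. The small sketch below reproduces that lookup failure from the standard library; it is a manual illustration, not part of the test.

package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("fake.domain")
	if err != nil {
		// Expected output: "lookup fake.domain ...: no such host",
		// matching the containerd error in the log above.
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}
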
	
	
	==> coredns [658e15ecef2dafe3d0bf9b9edb26ac278640956ddad27a4b7a3c62bc89fb2506] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:56992 - 40363 "HINFO IN 3109128270378564685.5637816105214737005. udp 57 false 512" - - 0 2.000323007s
	[ERROR] plugin/errors: 2 3109128270378564685.5637816105214737005. HINFO: read udp 10.244.0.2:38291->192.168.85.1:53: i/o timeout
	[INFO] 127.0.0.1:56564 - 29966 "HINFO IN 3109128270378564685.5637816105214737005. udp 57 false 512" - - 0 2.001044941s
	[ERROR] plugin/errors: 2 3109128270378564685.5637816105214737005. HINFO: read udp 10.244.0.2:34418->192.168.85.1:53: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] 127.0.0.1:43699 - 39336 "HINFO IN 3109128270378564685.5637816105214737005. udp 57 false 512" - - 0 2.000715331s
	[ERROR] plugin/errors: 2 3109128270378564685.5637816105214737005. HINFO: read udp 10.244.0.2:47656->192.168.85.1:53: i/o timeout
	[INFO] 127.0.0.1:40867 - 39608 "HINFO IN 3109128270378564685.5637816105214737005. udp 57 false 512" - - 0 2.001048794s
	[ERROR] plugin/errors: 2 3109128270378564685.5637816105214737005. HINFO: read udp 10.244.0.2:46436->192.168.85.1:53: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [81516948c500cddd74ea5e02f4e3d75fcaf2b7d2aef946d84a3656def8fdf90b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51151 - 25647 "HINFO IN 2174771630841354638.2989325533945903380. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062767086s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               no-preload-364197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-364197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=no-preload-364197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_11_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:11:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-364197
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:13:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:13:53 +0000   Fri, 19 Sep 2025 23:11:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:13:53 +0000   Fri, 19 Sep 2025 23:11:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:13:53 +0000   Fri, 19 Sep 2025 23:11:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:13:53 +0000   Fri, 19 Sep 2025 23:11:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-364197
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 18b5d0a888534bc6af7b0590d1485844
	  System UUID:                dddc9917-cb17-435a-a3e1-cd4a58751c59
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 coredns-66bc5c9577-xg99k                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m46s
	  kube-system                 etcd-no-preload-364197                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m52s
	  kube-system                 kindnet-89psw                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m47s
	  kube-system                 kube-apiserver-no-preload-364197              250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kube-controller-manager-no-preload-364197     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kube-proxy-t4j4z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 kube-scheduler-no-preload-364197              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 metrics-server-746fcd58dc-54wcq               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vdb6s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rj7g8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 2m45s              kube-proxy       
	  Normal  Starting                 2m52s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m52s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m52s              kubelet          Node no-preload-364197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s              kubelet          Node no-preload-364197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s              kubelet          Node no-preload-364197 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m47s              node-controller  Node no-preload-364197 event: Registered Node no-preload-364197 in Controller
	  Normal  NodeHasNoDiskPressure    99s (x8 over 99s)  kubelet          Node no-preload-364197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  99s (x8 over 99s)  kubelet          Node no-preload-364197 status is now: NodeHasSufficientMemory
	  Normal  Starting                 99s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     99s (x7 over 99s)  kubelet          Node no-preload-364197 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           93s                node-controller  Node no-preload-364197 event: Registered Node no-preload-364197 in Controller
	  Normal  Starting                 5s                 kubelet          Starting kubelet.
	  Normal  Starting                 5s                 kubelet          Starting kubelet.
	  Normal  Starting                 4s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4s                 kubelet          Node no-preload-364197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s                 kubelet          Node no-preload-364197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s                 kubelet          Node no-preload-364197 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.995983] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.504855] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.994952] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.505945] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501588] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.993278] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.507907] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500995] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.992251] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.509112] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501500] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989799] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.510643] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501653] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989383] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.511830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500482] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989056] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.512088] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500241] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501649] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501291] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501422] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[Sep19 23:13] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	
	
	==> etcd [22f8dfd9e25c5e22c40757e1e8d4aca05929cb7e5bacc483cd10eca2a6cbaf53] <==
	{"level":"warn","ts":"2025-09-19T23:11:02.022915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.033669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.047563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.055349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.066708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.077867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.085100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.095050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.104401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.111576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.119779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.127986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.135874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.143987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.151894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.159535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.167785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.174810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.187644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.200143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.209910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.216841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.225643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:02.274234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60420","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T23:11:10.491642Z","caller":"traceutil/trace.go:172","msg":"trace[585345196] transaction","detail":"{read_only:false; response_revision:301; number_of_response:1; }","duration":"132.970471ms","start":"2025-09-19T23:11:10.358639Z","end":"2025-09-19T23:11:10.491610Z","steps":["trace[585345196] 'process raft request'  (duration: 132.788165ms)"],"step_count":1}
	
	
	==> etcd [43f594ae64f30adcfece56bff232d9a5d66d10b57aa5eb81dcd23096c4d9fefe] <==
	{"level":"warn","ts":"2025-09-19T23:12:20.419258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.427563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.434267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.449875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.457062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.464584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.472378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.479128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.485710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.493422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.501409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.508450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.514933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.522123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.529109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.536591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.549358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:20.557772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:12:51.745932Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.388145ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722595826412087472 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-sbo7gcm2fodinrvdtgquuedjoq\" mod_revision:692 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-sbo7gcm2fodinrvdtgquuedjoq\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-sbo7gcm2fodinrvdtgquuedjoq\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:12:51.746065Z","caller":"traceutil/trace.go:172","msg":"trace[735937185] transaction","detail":"{read_only:false; response_revision:701; number_of_response:1; }","duration":"144.569588ms","start":"2025-09-19T23:12:51.601479Z","end":"2025-09-19T23:12:51.746048Z","steps":["trace[735937185] 'process raft request'  (duration: 23.342875ms)","trace[735937185] 'compare'  (duration: 120.100805ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:12:52.154410Z","caller":"traceutil/trace.go:172","msg":"trace[275387585] transaction","detail":"{read_only:false; response_revision:702; number_of_response:1; }","duration":"102.189119ms","start":"2025-09-19T23:12:52.052200Z","end":"2025-09-19T23:12:52.154389Z","steps":["trace[275387585] 'process raft request'  (duration: 102.037538ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:12:52.658614Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.154537ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722595826412087483 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:06ed99643ffce8ba>","response":"size:40"}
	{"level":"warn","ts":"2025-09-19T23:13:12.807135Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.413548ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:474"}
	{"level":"info","ts":"2025-09-19T23:13:12.807245Z","caller":"traceutil/trace.go:172","msg":"trace[1679246197] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:752; }","duration":"182.593348ms","start":"2025-09-19T23:13:12.624636Z","end":"2025-09-19T23:13:12.807230Z","steps":["trace[1679246197] 'range keys from in-memory index tree'  (duration: 182.267088ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:13:22.201794Z","caller":"traceutil/trace.go:172","msg":"trace[879742454] transaction","detail":"{read_only:false; response_revision:759; number_of_response:1; }","duration":"122.917396ms","start":"2025-09-19T23:13:22.078856Z","end":"2025-09-19T23:13:22.201774Z","steps":["trace[879742454] 'process raft request'  (duration: 122.772171ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:13:57 up  1:56,  0 users,  load average: 5.01, 3.90, 2.42
	Linux no-preload-364197 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [05a104972ade2c044b7ecfd589a3e6279429a5573d073092fa3575eb43f33fb6] <==
	I0919 23:11:15.104537       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0919 23:11:15.104827       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0919 23:11:15.105011       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:11:15.105033       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:11:15.105056       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:11:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:11:15.405038       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:11:15.405078       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:11:15.405089       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:11:15.405909       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0919 23:11:15.706466       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:11:15.706551       1 metrics.go:72] Registering metrics
	I0919 23:11:15.770426       1 controller.go:711] "Syncing nftables rules"
	I0919 23:11:25.408283       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:11:25.408356       1 main.go:301] handling current node
	I0919 23:11:35.414272       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:11:35.414304       1 main.go:301] handling current node
	I0919 23:11:45.409230       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:11:45.409262       1 main.go:301] handling current node
	I0919 23:11:55.407306       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:11:55.407358       1 main.go:301] handling current node
	
	
	==> kindnet [70e9a93e23676ba38f41c165c77120324ad986079be2f1dda89a089c06e82ec7] <==
	I0919 23:12:22.493733       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:12:22.493752       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:12:22.493772       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:12:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:12:22.749276       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:12:22.793094       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:12:22.793255       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:12:22.793491       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0919 23:12:52.794505       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0919 23:12:52.794505       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0919 23:12:52.794556       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0919 23:12:52.794624       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0919 23:13:23.957643       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0919 23:13:23.968096       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0919 23:13:23.970968       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0919 23:13:24.173171       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I0919 23:13:26.893920       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:13:26.893957       1 metrics.go:72] Registering metrics
	I0919 23:13:26.894022       1 controller.go:711] "Syncing nftables rules"
	I0919 23:13:32.749544       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:13:32.749603       1 main.go:301] handling current node
	I0919 23:13:42.754230       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:13:42.754277       1 main.go:301] handling current node
	I0919 23:13:52.749266       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:13:52.749294       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4f4b7cb19d71d424c1f0b8eed4886e134667affbc967fc67d6ab5091a5ec5afc] <==
	I0919 23:11:10.760678       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:11:10.811774       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:11:10.817215       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:11:10.860056       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 23:11:10.860056       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E0919 23:11:58.296051       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:55532: use of closed network connection
	I0919 23:11:59.083820       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 23:11:59.089604       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:11:59.089675       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 23:11:59.089795       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0919 23:11:59.174580       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.96.51.251"}
	W0919 23:11:59.185075       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:11:59.185137       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 23:11:59.188366       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	W0919 23:11:59.192847       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:11:59.192908       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [6bc33bc1397bee9ae4cddae4044808fcc50a7a9cf5d158ec3fa6eb80e16e52ab] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:12:22.109568       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:12:24.482118       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:12:24.780094       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 23:12:24.930398       1 controller.go:667] quota admission added evaluator for: endpoints
	I0919 23:12:24.930398       1 controller.go:667] quota admission added evaluator for: endpoints
	I0919 23:12:25.035486       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	E0919 23:13:21.000684       1 dynamic_cafile_content.go:170] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:21.000906       1 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:21.002020       1 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:21.002185       1 dynamic_cafile_content.go:170] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:21.002209       1 dynamic_cafile_content.go:170] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	W0919 23:13:22.109349       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:13:22.109405       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 23:13:22.109422       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 23:13:22.110653       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:13:22.110935       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:13:22.110963       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:13:32.519551       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:13:49.409412       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [a725a4bde25d4c27402431bef26b0b2528b9ee8ce86de9668a3bc6c57218ae97] <==
	I0919 23:11:10.057104       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 23:11:10.057526       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 23:11:10.057666       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 23:11:10.058953       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 23:11:10.058995       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 23:11:10.061668       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 23:11:10.064887       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 23:11:10.066080       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0919 23:11:10.066140       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0919 23:11:10.066247       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0919 23:11:10.066263       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0919 23:11:10.066270       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0919 23:11:10.069513       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:11:10.075840       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-364197" podCIDRs=["10.244.0.0/24"]
	I0919 23:11:10.075887       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 23:11:10.077851       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 23:11:10.088352       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:11:10.090511       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 23:11:10.098863       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 23:11:10.101303       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 23:11:10.101309       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 23:11:10.102452       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 23:11:10.110033       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0919 23:11:10.110532       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0919 23:11:10.114488       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [cefe6d56503ab49c05eeb71c647a5070bf2298f7e5024960e954a0fa1becced9] <==
	I0919 23:12:24.416697       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 23:12:24.419047       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 23:12:24.425953       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 23:12:24.426913       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 23:12:24.426975       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 23:12:24.426974       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0919 23:12:24.426990       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0919 23:12:24.427030       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0919 23:12:24.427114       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:12:24.427130       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 23:12:24.427136       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 23:12:24.431910       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:12:24.433192       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:12:24.442536       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 23:12:24.448291       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0919 23:12:54.438195       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:12:54.458746       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:13:23.049966       1 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:23.049965       1 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:23.049999       1 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:23.050417       1 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:13:24.445094       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:13:24.469586       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:13:54.452497       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:13:54.483957       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [02daa0a76ef2fe82a3406571b3c039ffa479d47918842e391ad088b0d5deba09] <==
	E0919 23:12:36.414777       1 run.go:72] "command failed" err="failed complete: too many open files"
	
	
	==> kube-proxy [1c4ad1fabb59c4e57a9b67d110a8968a1a0978b942894c62df9d88aa2fdda568] <==
	I0919 23:13:02.425116       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:13:02.490127       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:13:02.591097       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:13:02.591147       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0919 23:13:02.591271       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:13:02.632139       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:13:02.632243       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:13:02.642997       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:13:02.643690       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:13:02.643744       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:13:02.645359       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:13:02.645377       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:13:02.645444       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:13:02.645455       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:13:02.645494       1 config.go:200] "Starting service config controller"
	I0919 23:13:02.645503       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:13:02.645742       1 config.go:309] "Starting node config controller"
	I0919 23:13:02.645756       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:13:02.746093       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:13:02.746189       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 23:13:02.746739       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:13:02.746782       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4033582ceab6ba6b5c4f950f6070e6fc0d1d797c421c10c1de6e06129df50b54] <==
	E0919 23:11:02.941574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 23:11:02.941640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 23:11:02.942100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 23:11:02.943275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 23:11:02.943468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 23:11:02.943586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 23:11:02.942276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 23:11:02.943748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 23:11:02.943835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 23:11:02.944059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 23:11:02.943599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 23:11:02.945060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 23:11:03.804349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 23:11:03.812989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 23:11:03.836582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 23:11:03.886936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 23:11:03.907211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 23:11:04.034583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 23:11:04.045992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 23:11:04.118751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 23:11:04.139566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 23:11:04.202019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 23:11:04.212421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 23:11:04.252425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I0919 23:11:06.135259       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [9cfac78cf12303fd4f547a1f839aba05a7b01f153dcae55f2e060f69f98c8e8d] <==
	I0919 23:12:19.500263       1 serving.go:386] Generated self-signed cert in-memory
	W0919 23:12:21.023505       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 23:12:21.023536       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 23:12:21.023548       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 23:12:21.023558       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 23:12:21.057122       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 23:12:21.057189       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:12:21.060208       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:12:21.060284       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:12:21.060670       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 23:12:21.061093       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 23:12:21.160477       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 23:13:53 no-preload-364197 kubelet[3246]: I0919 23:13:53.898219    3246 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.653255    3246 apiserver.go:52] "Watching apiserver"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.665557    3246 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.678464    3246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cc8840e-ebe9-4d2e-8e31-c00341a52c4a-lib-modules\") pod \"kube-proxy-t4j4z\" (UID: \"6cc8840e-ebe9-4d2e-8e31-c00341a52c4a\") " pod="kube-system/kube-proxy-t4j4z"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.678551    3246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/912ae03a-d62e-4d7d-8184-e16295d5ab7d-lib-modules\") pod \"kindnet-89psw\" (UID: \"912ae03a-d62e-4d7d-8184-e16295d5ab7d\") " pod="kube-system/kindnet-89psw"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.678906    3246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/912ae03a-d62e-4d7d-8184-e16295d5ab7d-cni-cfg\") pod \"kindnet-89psw\" (UID: \"912ae03a-d62e-4d7d-8184-e16295d5ab7d\") " pod="kube-system/kindnet-89psw"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.679390    3246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e3539cd4-a4d0-4bd2-a0c5-34c7cc316493-tmp\") pod \"storage-provisioner\" (UID: \"e3539cd4-a4d0-4bd2-a0c5-34c7cc316493\") " pod="kube-system/storage-provisioner"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.679447    3246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/912ae03a-d62e-4d7d-8184-e16295d5ab7d-xtables-lock\") pod \"kindnet-89psw\" (UID: \"912ae03a-d62e-4d7d-8184-e16295d5ab7d\") " pod="kube-system/kindnet-89psw"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.679866    3246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cc8840e-ebe9-4d2e-8e31-c00341a52c4a-xtables-lock\") pod \"kube-proxy-t4j4z\" (UID: \"6cc8840e-ebe9-4d2e-8e31-c00341a52c4a\") " pod="kube-system/kube-proxy-t4j4z"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.748207    3246 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.749177    3246 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.749526    3246 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.750198    3246 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: E0919 23:13:54.762219    3246 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-no-preload-364197\" already exists" pod="kube-system/kube-controller-manager-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: E0919 23:13:54.769563    3246 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-no-preload-364197\" already exists" pod="kube-system/kube-scheduler-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: E0919 23:13:54.771976    3246 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-no-preload-364197\" already exists" pod="kube-system/kube-apiserver-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: E0919 23:13:54.773581    3246 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-no-preload-364197\" already exists" pod="kube-system/etcd-no-preload-364197"
	Sep 19 23:13:54 no-preload-364197 kubelet[3246]: I0919 23:13:54.959679    3246 scope.go:117] "RemoveContainer" containerID="b9a3ddd53c0982f2b6456f555d5b0736bed2fe411350d742099824f5f0e34944"
	Sep 19 23:13:55 no-preload-364197 kubelet[3246]: E0919 23:13:55.019349    3246 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 19 23:13:55 no-preload-364197 kubelet[3246]: E0919 23:13:55.019554    3246 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 19 23:13:55 no-preload-364197 kubelet[3246]: E0919 23:13:55.019800    3246 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-54wcq_kube-system(8b7f16ad-5a72-473e-90dc-6ad786e6e753): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" logger="UnhandledError"
	Sep 19 23:13:55 no-preload-364197 kubelet[3246]: E0919 23:13:55.020000    3246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-54wcq" podUID="8b7f16ad-5a72-473e-90dc-6ad786e6e753"
	Sep 19 23:13:55 no-preload-364197 kubelet[3246]: I0919 23:13:55.753350    3246 scope.go:117] "RemoveContainer" containerID="b9a3ddd53c0982f2b6456f555d5b0736bed2fe411350d742099824f5f0e34944"
	Sep 19 23:13:55 no-preload-364197 kubelet[3246]: I0919 23:13:55.754335    3246 scope.go:117] "RemoveContainer" containerID="d604f10e2b862ad4bdf7f3512e5ecb35d331257efc4e8ff148379c1af7087e06"
	Sep 19 23:13:55 no-preload-364197 kubelet[3246]: E0919 23:13:55.754495    3246 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vdb6s_kubernetes-dashboard(5f08ef04-d85c-45e8-87f6-34f9569daf46)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vdb6s" podUID="5f08ef04-d85c-45e8-87f6-34f9569daf46"
	
	
	==> kubernetes-dashboard [18e79812d03fa9def5117aa7a3a884d976dfdc94c077b748a590d0c5307a33b4] <==
	2025/09/19 23:13:47 Using namespace: kubernetes-dashboard
	2025/09/19 23:13:47 Using in-cluster config to connect to apiserver
	2025/09/19 23:13:47 Using secret token for csrf signing
	2025/09/19 23:13:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:13:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/19 23:13:47 Successful initial request to the apiserver, version: v1.34.0
	2025/09/19 23:13:47 Generating JWE encryption key
	2025/09/19 23:13:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/19 23:13:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/19 23:13:47 Initializing JWE encryption key from synchronized object
	2025/09/19 23:13:47 Creating in-cluster Sidecar client
	2025/09/19 23:13:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:13:47 Serving insecurely on HTTP port: 9090
	2025/09/19 23:13:47 Starting overwatch
	
	
	==> kubernetes-dashboard [265b08adaae7361b442c8dbb2cece3f4be85d7eeb4a1035bce7e3fc80dfe2381] <==
	2025/09/19 23:13:00 Using namespace: kubernetes-dashboard
	2025/09/19 23:13:00 Using in-cluster config to connect to apiserver
	2025/09/19 23:13:00 Using secret token for csrf signing
	2025/09/19 23:13:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:13:00 Starting overwatch
	panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: i/o timeout
	
	goroutine 1 [running]:
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00071fae8)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x30e
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0003a6100)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:527 +0x94
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0x19aba3a?)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:495 +0x32
	github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:594
	main.main()
		/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:96 +0x1cf
	
	
	==> storage-provisioner [84fd26f46a650832d3eb69def25786df1f79a73ba8e2bd5c0865f96ca1de4b47] <==
	W0919 23:13:33.833423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:35.837449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:35.842313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:37.846113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:37.851511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:39.854626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:39.858791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:41.862516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:41.867222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:43.870358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:43.874583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:45.879778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:45.884429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:47.888303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:47.892616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:49.895851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:49.901215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:51.904950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:51.912664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:53.916617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:53.921735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:55.924911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:55.933147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:57.936822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:57.943001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b99df30ad600675c9bcc7e13b3281021bfd6a2b7e8368cf5d4c7ec80ee03974a] <==
	I0919 23:12:21.859510       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 23:12:51.862764       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-364197 -n no-preload-364197
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-364197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-54wcq
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-364197 describe pod metrics-server-746fcd58dc-54wcq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-364197 describe pod metrics-server-746fcd58dc-54wcq: exit status 1 (76.120419ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-54wcq" not found

** /stderr **
helpers_test.go:287: kubectl --context no-preload-364197 describe pod metrics-server-746fcd58dc-54wcq: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (9.75s)

TestStartStop/group/embed-certs/serial/Pause (9.81s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-403962 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-403962 --alsologtostderr -v=1: (1.033430096s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-403962 -n embed-certs-403962
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-403962 -n embed-certs-403962: exit status 2 (337.590458ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-403962 -n embed-certs-403962
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-403962 -n embed-certs-403962: exit status 2 (427.21287ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-403962 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-403962 -n embed-certs-403962
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-403962 -n embed-certs-403962: exit status 2 (383.718181ms)

-- stdout --
	Running

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-403962 -n embed-certs-403962
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-403962 -n embed-certs-403962: exit status 2 (403.604086ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-403962
helpers_test.go:243: (dbg) docker inspect embed-certs-403962:

-- stdout --
	[
	    {
	        "Id": "a63af2c8f6f3d51a281ad094e87f99313b1ffdb287d036454c3eac2cd2773ece",
	        "Created": "2025-09-19T23:10:55.400103893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295421,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T23:12:49.952408367Z",
	            "FinishedAt": "2025-09-19T23:12:48.982627779Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/a63af2c8f6f3d51a281ad094e87f99313b1ffdb287d036454c3eac2cd2773ece/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a63af2c8f6f3d51a281ad094e87f99313b1ffdb287d036454c3eac2cd2773ece/hostname",
	        "HostsPath": "/var/lib/docker/containers/a63af2c8f6f3d51a281ad094e87f99313b1ffdb287d036454c3eac2cd2773ece/hosts",
	        "LogPath": "/var/lib/docker/containers/a63af2c8f6f3d51a281ad094e87f99313b1ffdb287d036454c3eac2cd2773ece/a63af2c8f6f3d51a281ad094e87f99313b1ffdb287d036454c3eac2cd2773ece-json.log",
	        "Name": "/embed-certs-403962",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-403962:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-403962",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a63af2c8f6f3d51a281ad094e87f99313b1ffdb287d036454c3eac2cd2773ece",
	                "LowerDir": "/var/lib/docker/overlay2/3a88e7826b7bee87d0fa55c7d6eb85cf6dd27425a2fda0f19e201ef730e85372-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a88e7826b7bee87d0fa55c7d6eb85cf6dd27425a2fda0f19e201ef730e85372/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a88e7826b7bee87d0fa55c7d6eb85cf6dd27425a2fda0f19e201ef730e85372/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a88e7826b7bee87d0fa55c7d6eb85cf6dd27425a2fda0f19e201ef730e85372/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-403962",
	                "Source": "/var/lib/docker/volumes/embed-certs-403962/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-403962",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-403962",
	                "name.minikube.sigs.k8s.io": "embed-certs-403962",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "43af2aa887632a3b88811a0c92a1eb3fc6e55ea6ead5b7bd04d9d10aa51f9ba8",
	            "SandboxKey": "/var/run/docker/netns/43af2aa88763",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-403962": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:eb:e4:f0:48:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eeb244b5b4d931aeb6e8ce39276a990d3a3ab31cb92cb0ad8df9ecee9db3b477",
	                    "EndpointID": "1d01e88f503576060d50fb72d8e5f51c72f1eaebdc6f82076c33e7fc88d3ef99",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-403962",
	                        "a63af2c8f6f3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-403962 -n embed-certs-403962
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-403962 -n embed-certs-403962: exit status 2 (378.25667ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-403962 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-403962 logs -n 25: (2.042017903s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable metrics-server -p no-preload-364197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:11 UTC │ 19 Sep 25 23:11 UTC │
	│ stop    │ -p no-preload-364197 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:11 UTC │ 19 Sep 25 23:12 UTC │
	│ addons  │ enable dashboard -p no-preload-364197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ start   │ -p no-preload-364197 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-403962 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ stop    │ -p embed-certs-403962 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ image   │ old-k8s-version-757990 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ pause   │ -p old-k8s-version-757990 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ unpause │ -p old-k8s-version-757990 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ delete  │ -p old-k8s-version-757990                                                                                                                                                                                                                           │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ delete  │ -p old-k8s-version-757990                                                                                                                                                                                                                           │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ delete  │ -p disable-driver-mounts-606373                                                                                                                                                                                                                     │ disable-driver-mounts-606373 │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ start   │ -p default-k8s-diff-port-149888 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-149888 │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-403962 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ start   │ -p embed-certs-403962 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:13 UTC │
	│ start   │ -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-430859    │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-430859    │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ delete  │ -p kubernetes-upgrade-430859                                                                                                                                                                                                                        │ kubernetes-upgrade-430859    │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ start   │ -p newest-cni-312465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-312465            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │                     │
	│ image   │ no-preload-364197 image list --format=json                                                                                                                                                                                                          │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ pause   │ -p no-preload-364197 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ unpause │ -p no-preload-364197 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ image   │ embed-certs-403962 image list --format=json                                                                                                                                                                                                         │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ pause   │ -p embed-certs-403962 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ unpause │ -p embed-certs-403962 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:13:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:13:27.238593  304826 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:13:27.238920  304826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:13:27.238933  304826 out.go:374] Setting ErrFile to fd 2...
	I0919 23:13:27.238939  304826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:13:27.239301  304826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 23:13:27.240254  304826 out.go:368] Setting JSON to false
	I0919 23:13:27.242293  304826 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6951,"bootTime":1758316656,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:13:27.242391  304826 start.go:140] virtualization: kvm guest
	I0919 23:13:27.245079  304826 out.go:179] * [newest-cni-312465] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:13:27.247014  304826 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:13:27.247038  304826 notify.go:220] Checking for updates...
	I0919 23:13:27.250017  304826 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:13:27.251473  304826 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:13:27.253044  304826 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 23:13:27.254720  304826 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:13:27.256145  304826 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:13:27.258280  304826 config.go:182] Loaded profile config "default-k8s-diff-port-149888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:27.258431  304826 config.go:182] Loaded profile config "embed-certs-403962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:27.258597  304826 config.go:182] Loaded profile config "no-preload-364197": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:27.258738  304826 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:13:27.288883  304826 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:13:27.288975  304826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:13:27.365354  304826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 23:13:27.353196914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:13:27.365506  304826 docker.go:318] overlay module found
	I0919 23:13:27.367763  304826 out.go:179] * Using the docker driver based on user configuration
	I0919 23:13:27.369311  304826 start.go:304] selected driver: docker
	I0919 23:13:27.369334  304826 start.go:918] validating driver "docker" against <nil>
	I0919 23:13:27.369348  304826 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:13:27.370111  304826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:13:27.453927  304826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 23:13:27.442609844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:13:27.454140  304826 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W0919 23:13:27.454193  304826 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0919 23:13:27.454507  304826 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0919 23:13:27.457066  304826 out.go:179] * Using Docker driver with root privileges
	I0919 23:13:27.458665  304826 cni.go:84] Creating CNI manager for ""
	I0919 23:13:27.458745  304826 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:13:27.458755  304826 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 23:13:27.458835  304826 start.go:348] cluster config:
	{Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:13:27.460214  304826 out.go:179] * Starting "newest-cni-312465" primary control-plane node in "newest-cni-312465" cluster
	I0919 23:13:27.461705  304826 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 23:13:27.463479  304826 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:13:27.464969  304826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:13:27.465036  304826 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 23:13:27.465066  304826 cache.go:58] Caching tarball of preloaded images
	I0919 23:13:27.465145  304826 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:13:27.465211  304826 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:13:27.465224  304826 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 23:13:27.465373  304826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json ...
	I0919 23:13:27.465402  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json: {Name:mkbe0b2096af0dfcb672d8d5ff02d95192e51311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:27.491881  304826 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:13:27.491906  304826 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:13:27.491929  304826 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:13:27.491965  304826 start.go:360] acquireMachinesLock for newest-cni-312465: {Name:mkdaed0f91b48ccb0806887f4c48e7b6207e9286 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:13:27.492089  304826 start.go:364] duration metric: took 98.144µs to acquireMachinesLock for "newest-cni-312465"
	I0919 23:13:27.492120  304826 start.go:93] Provisioning new machine with config: &{Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMet
rics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:13:27.492213  304826 start.go:125] createHost starting for "" (driver="docker")
	I0919 23:13:25.986611  294587 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 22.501936199s
	I0919 23:13:25.991147  294587 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:13:25.991278  294587 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I0919 23:13:25.991386  294587 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:13:25.991522  294587 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W0919 23:13:25.316055  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:27.322716  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:27.416884  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	W0919 23:13:29.942623  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	I0919 23:13:27.494730  304826 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:13:27.494955  304826 start.go:159] libmachine.API.Create for "newest-cni-312465" (driver="docker")
	I0919 23:13:27.494995  304826 client.go:168] LocalClient.Create starting
	I0919 23:13:27.495095  304826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 23:13:27.495131  304826 main.go:141] libmachine: Decoding PEM data...
	I0919 23:13:27.495171  304826 main.go:141] libmachine: Parsing certificate...
	I0919 23:13:27.495239  304826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 23:13:27.495270  304826 main.go:141] libmachine: Decoding PEM data...
	I0919 23:13:27.495286  304826 main.go:141] libmachine: Parsing certificate...
	I0919 23:13:27.495751  304826 cli_runner.go:164] Run: docker network inspect newest-cni-312465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:13:27.519239  304826 cli_runner.go:211] docker network inspect newest-cni-312465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:13:27.519336  304826 network_create.go:284] running [docker network inspect newest-cni-312465] to gather additional debugging logs...
	I0919 23:13:27.519357  304826 cli_runner.go:164] Run: docker network inspect newest-cni-312465
	W0919 23:13:27.542030  304826 cli_runner.go:211] docker network inspect newest-cni-312465 returned with exit code 1
	I0919 23:13:27.542062  304826 network_create.go:287] error running [docker network inspect newest-cni-312465]: docker network inspect newest-cni-312465: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-312465 not found
	I0919 23:13:27.542075  304826 network_create.go:289] output of [docker network inspect newest-cni-312465]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-312465 not found
	
	** /stderr **
	I0919 23:13:27.542219  304826 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:13:27.573077  304826 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-465af21e2d8d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:b5:e0:20:10:48} reservation:<nil>}
	I0919 23:13:27.574029  304826 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd9dd989b948 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:ff:32:51:a1:28} reservation:<nil>}
	I0919 23:13:27.575058  304826 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d5cd7c460be9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:d8:0f:d3:49:b9} reservation:<nil>}
	I0919 23:13:27.576219  304826 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-eeb244b5b4d9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:19:45:7a:f8:43} reservation:<nil>}
	I0919 23:13:27.577101  304826 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-76962f0867a9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:d8:43:3c:3c:e2} reservation:<nil>}
	I0919 23:13:27.578259  304826 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cf1dc0}
	I0919 23:13:27.578290  304826 network_create.go:124] attempt to create docker network newest-cni-312465 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0919 23:13:27.578338  304826 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-312465 newest-cni-312465
	I0919 23:13:27.664074  304826 network_create.go:108] docker network newest-cni-312465 192.168.94.0/24 created
	I0919 23:13:27.664108  304826 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-312465" container
	I0919 23:13:27.664204  304826 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:13:27.686848  304826 cli_runner.go:164] Run: docker volume create newest-cni-312465 --label name.minikube.sigs.k8s.io=newest-cni-312465 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:13:27.711517  304826 oci.go:103] Successfully created a docker volume newest-cni-312465
	I0919 23:13:27.711624  304826 cli_runner.go:164] Run: docker run --rm --name newest-cni-312465-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-312465 --entrypoint /usr/bin/test -v newest-cni-312465:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:13:28.191316  304826 oci.go:107] Successfully prepared a docker volume newest-cni-312465
	I0919 23:13:28.191366  304826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:13:28.191389  304826 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:13:28.191481  304826 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-312465:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 23:13:32.076573  304826 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-312465:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.885033462s)
	I0919 23:13:32.076612  304826 kic.go:203] duration metric: took 3.885218568s to extract preloaded images to volume ...
	W0919 23:13:32.076710  304826 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:13:32.076743  304826 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:13:32.076794  304826 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:13:32.149761  304826 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-312465 --name newest-cni-312465 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-312465 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-312465 --network newest-cni-312465 --ip 192.168.94.2 --volume newest-cni-312465:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:13:28.139399  294587 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.148131492s
	I0919 23:13:28.449976  294587 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.458741458s
	I0919 23:13:32.493086  294587 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.501778199s
	I0919 23:13:32.510785  294587 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:13:32.524242  294587 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:13:32.539521  294587 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:13:32.539729  294587 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-149888 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:13:32.551224  294587 kubeadm.go:310] [bootstrap-token] Using token: n81jvw.nat4ajoeag176u3n
	I0919 23:13:32.553385  294587 out.go:252]   - Configuring RBAC rules ...
	I0919 23:13:32.553522  294587 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:13:32.557811  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:13:32.567024  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:13:32.570531  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:13:32.576653  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:13:32.580237  294587 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:13:32.901145  294587 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:13:33.324739  294587 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:13:33.900632  294587 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:13:33.901573  294587 kubeadm.go:310] 
	I0919 23:13:33.901667  294587 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:13:33.901677  294587 kubeadm.go:310] 
	I0919 23:13:33.901751  294587 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:13:33.901758  294587 kubeadm.go:310] 
	I0919 23:13:33.901777  294587 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:13:33.901831  294587 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:13:33.901895  294587 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:13:33.901902  294587 kubeadm.go:310] 
	I0919 23:13:33.901944  294587 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:13:33.901974  294587 kubeadm.go:310] 
	I0919 23:13:33.902054  294587 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:13:33.902064  294587 kubeadm.go:310] 
	I0919 23:13:33.902143  294587 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:13:33.902266  294587 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:13:33.902331  294587 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:13:33.902339  294587 kubeadm.go:310] 
	I0919 23:13:33.902406  294587 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:13:33.902479  294587 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:13:33.902485  294587 kubeadm.go:310] 
	I0919 23:13:33.902551  294587 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token n81jvw.nat4ajoeag176u3n \
	I0919 23:13:33.902635  294587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 \
	I0919 23:13:33.902655  294587 kubeadm.go:310] 	--control-plane 
	I0919 23:13:33.902661  294587 kubeadm.go:310] 
	I0919 23:13:33.902730  294587 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:13:33.902737  294587 kubeadm.go:310] 
	I0919 23:13:33.902801  294587 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token n81jvw.nat4ajoeag176u3n \
	I0919 23:13:33.902883  294587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 
	I0919 23:13:33.906239  294587 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:13:33.906372  294587 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:13:33.906402  294587 cni.go:84] Creating CNI manager for ""
	I0919 23:13:33.906416  294587 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:13:33.908216  294587 out.go:179] * Configuring CNI (Container Networking Interface) ...
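Note: with the "docker" driver and "containerd" runtime minikube picks kindnet as the CNI and applies a rendered manifest with the bundled kubectl (the actual invocation appears later in this log). A sketch of that step, assuming the manifest has been staged at /var/tmp/minikube/cni.yaml as shown below:

```bash
# Apply the kindnet CNI manifest against the kubeconfig kubeadm just wrote
sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply \
  --kubeconfig=/var/lib/minikube/kubeconfig \
  -f /var/tmp/minikube/cni.yaml
```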
	W0919 23:13:29.819116  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:31.826948  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:34.316941  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	I0919 23:13:32.476430  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Running}}
	I0919 23:13:32.500104  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:13:32.523104  304826 cli_runner.go:164] Run: docker exec newest-cni-312465 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:13:32.578263  304826 oci.go:144] the created container "newest-cni-312465" has a running status.
	I0919 23:13:32.578295  304826 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa...
	I0919 23:13:32.976039  304826 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:13:33.009077  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:13:33.031547  304826 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:13:33.031565  304826 kic_runner.go:114] Args: [docker exec --privileged newest-cni-312465 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:13:33.092603  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:13:33.115283  304826 machine.go:93] provisionDockerMachine start ...
	I0919 23:13:33.115380  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:33.139784  304826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:13:33.140058  304826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I0919 23:13:33.140073  304826 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:13:33.290427  304826 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-312465
	
	I0919 23:13:33.290458  304826 ubuntu.go:182] provisioning hostname "newest-cni-312465"
	I0919 23:13:33.290507  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:33.316275  304826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:13:33.316511  304826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I0919 23:13:33.316526  304826 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-312465 && echo "newest-cni-312465" | sudo tee /etc/hostname
	I0919 23:13:33.472768  304826 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-312465
	
	I0919 23:13:33.472864  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:33.494111  304826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:13:33.494398  304826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I0919 23:13:33.494430  304826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-312465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-312465/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-312465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:13:33.635421  304826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:13:33.635451  304826 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 23:13:33.635494  304826 ubuntu.go:190] setting up certificates
	I0919 23:13:33.635517  304826 provision.go:84] configureAuth start
	I0919 23:13:33.635574  304826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:13:33.655878  304826 provision.go:143] copyHostCerts
	I0919 23:13:33.655961  304826 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 23:13:33.655977  304826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 23:13:33.656058  304826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 23:13:33.656241  304826 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 23:13:33.656255  304826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 23:13:33.656304  304826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 23:13:33.656405  304826 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 23:13:33.656415  304826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 23:13:33.656457  304826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 23:13:33.656554  304826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.newest-cni-312465 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-312465]
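Note: the server certificate above is generated with SANs covering loopback, the container IP and the minikube hostnames. A sketch (path taken from the log, output abbreviated) for verifying the SAN list on the resulting file:

```bash
# Confirm the Subject Alternative Names on the generated server certificate
openssl x509 -noout -text \
  -in /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem \
  | grep -A1 'Subject Alternative Name'
```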
	I0919 23:13:34.255292  304826 provision.go:177] copyRemoteCerts
	I0919 23:13:34.255368  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:13:34.255413  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.284316  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.387988  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 23:13:34.419504  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0919 23:13:34.448496  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:13:34.475661  304826 provision.go:87] duration metric: took 840.126723ms to configureAuth
	I0919 23:13:34.475694  304826 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:13:34.475872  304826 config.go:182] Loaded profile config "newest-cni-312465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:34.475881  304826 machine.go:96] duration metric: took 1.360576611s to provisionDockerMachine
	I0919 23:13:34.475891  304826 client.go:171] duration metric: took 6.980885128s to LocalClient.Create
	I0919 23:13:34.475913  304826 start.go:167] duration metric: took 6.980958258s to libmachine.API.Create "newest-cni-312465"
	I0919 23:13:34.475926  304826 start.go:293] postStartSetup for "newest-cni-312465" (driver="docker")
	I0919 23:13:34.475937  304826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:13:34.475995  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:13:34.476029  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.496668  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.598095  304826 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:13:34.602045  304826 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:13:34.602091  304826 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:13:34.602104  304826 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:13:34.602111  304826 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:13:34.602121  304826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 23:13:34.602190  304826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 23:13:34.602281  304826 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 23:13:34.602369  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:13:34.612660  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:13:34.643262  304826 start.go:296] duration metric: took 167.32169ms for postStartSetup
	I0919 23:13:34.643684  304826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:13:34.663272  304826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json ...
	I0919 23:13:34.663583  304826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:13:34.663633  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.683961  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.779205  304826 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:13:34.785070  304826 start.go:128] duration metric: took 7.292838847s to createHost
	I0919 23:13:34.785099  304826 start.go:83] releasing machines lock for "newest-cni-312465", held for 7.292995602s
	I0919 23:13:34.785189  304826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:13:34.807464  304826 ssh_runner.go:195] Run: cat /version.json
	I0919 23:13:34.807503  304826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:13:34.807575  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.807583  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.829219  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.829637  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:35.008352  304826 ssh_runner.go:195] Run: systemctl --version
	I0919 23:13:35.013908  304826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:13:35.019269  304826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:13:35.055596  304826 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:13:35.055680  304826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:13:35.090798  304826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
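Note: the two `find` invocations above patch any loopback CNI config to cniVersion 1.0.0 (adding a "name" field when missing) and rename bridge/podman configs out of the way so kindnet owns pod networking. A simplified, commented sketch of the same idea (not the exact minikube command):

```bash
# 1) ensure loopback configs carry a "name" and a 1.0.0 cniVersion
for f in /etc/cni/net.d/*loopback.conf*; do
  [ -e "$f" ] || continue
  grep -q '"name"' "$f" || \
    sudo sed -i '/"type": "loopback"/i \ \ "name": "loopback",' "$f"
  sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$f"
done

# 2) disable competing bridge/podman configs
for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
  [ -e "$f" ] && sudo mv "$f" "$f.mk_disabled"
done
```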
	I0919 23:13:35.090825  304826 start.go:495] detecting cgroup driver to use...
	I0919 23:13:35.090862  304826 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:13:35.090925  304826 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 23:13:35.106670  304826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:13:35.120167  304826 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:13:35.120229  304826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:13:35.136229  304826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:13:35.152080  304826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:13:35.229432  304826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:13:35.314675  304826 docker.go:234] disabling docker service ...
	I0919 23:13:35.314746  304826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:13:35.336969  304826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:13:35.352061  304826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:13:35.433841  304826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:13:35.511892  304826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:13:35.525179  304826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:13:35.544848  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:13:35.558556  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:13:35.570787  304826 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:13:35.570874  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:13:35.583714  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:13:35.596563  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:13:35.608811  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:13:35.621274  304826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:13:35.632671  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:13:35.646560  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:13:35.659112  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:13:35.671491  304826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:13:35.681987  304826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:13:35.693319  304826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:13:35.765943  304826 ssh_runner.go:195] Run: sudo systemctl restart containerd
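Note: the sed edits above point containerd at the pause:3.10.1 sandbox image, switch it to the systemd cgroup driver and the runc v2 shim, enable IPv4 forwarding, and restart the daemon. A condensed sketch of the key edits, taken from the commands in this log:

```bash
# Key containerd configuration changes applied before the restart
sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml

# pod-to-pod traffic needs IPv4 forwarding on the node
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"

sudo systemctl daemon-reload && sudo systemctl restart containerd
```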
	I0919 23:13:35.900474  304826 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 23:13:35.900553  304826 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 23:13:35.904775  304826 start.go:563] Will wait 60s for crictl version
	I0919 23:13:35.904838  304826 ssh_runner.go:195] Run: which crictl
	I0919 23:13:35.908969  304826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:13:35.948499  304826 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 23:13:35.948718  304826 ssh_runner.go:195] Run: containerd --version
	I0919 23:13:35.976417  304826 ssh_runner.go:195] Run: containerd --version
	I0919 23:13:36.005950  304826 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 23:13:36.007659  304826 cli_runner.go:164] Run: docker network inspect newest-cni-312465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:13:36.028772  304826 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0919 23:13:36.033878  304826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
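Note: the command above rewrites /etc/hosts so host.minikube.internal always resolves to the docker network gateway. An idempotent sketch of the same approach (gateway IP 192.168.94.1 taken from the log):

```bash
# Drop any stale host.minikube.internal line, append the current gateway,
# then install the rebuilt hosts file.
{ grep -v $'\thost.minikube.internal$' /etc/hosts
  printf '192.168.94.1\thost.minikube.internal\n'
} > /tmp/hosts.new && sudo cp /tmp/hosts.new /etc/hosts
```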
	I0919 23:13:36.053802  304826 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W0919 23:13:31.971038  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	W0919 23:13:34.412827  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	I0919 23:13:36.412824  286555 pod_ready.go:94] pod "coredns-66bc5c9577-xg99k" is "Ready"
	I0919 23:13:36.412859  286555 pod_ready.go:86] duration metric: took 1m14.00590752s for pod "coredns-66bc5c9577-xg99k" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.415705  286555 pod_ready.go:83] waiting for pod "etcd-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.420550  286555 pod_ready.go:94] pod "etcd-no-preload-364197" is "Ready"
	I0919 23:13:36.420580  286555 pod_ready.go:86] duration metric: took 4.848977ms for pod "etcd-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.423284  286555 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.428673  286555 pod_ready.go:94] pod "kube-apiserver-no-preload-364197" is "Ready"
	I0919 23:13:36.428703  286555 pod_ready.go:86] duration metric: took 5.394829ms for pod "kube-apiserver-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.431305  286555 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.610936  286555 pod_ready.go:94] pod "kube-controller-manager-no-preload-364197" is "Ready"
	I0919 23:13:36.610963  286555 pod_ready.go:86] duration metric: took 179.625984ms for pod "kube-controller-manager-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.056701  304826 kubeadm.go:875] updating cluster {Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:13:36.056877  304826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:13:36.057030  304826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:13:36.099591  304826 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:13:36.099615  304826 containerd.go:534] Images already preloaded, skipping extraction
	I0919 23:13:36.099675  304826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:13:36.143373  304826 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:13:36.143413  304826 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:13:36.143421  304826 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 containerd true true} ...
	I0919 23:13:36.143508  304826 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-312465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
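Note: the kubelet fragment above is written to a systemd drop-in (the 321-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). A sketch of what that file ends up containing, with flags copied from the log; exact layout is illustrative:

```bash
# Pin the kubelet to this node's runtime, kubeconfig and IP via a drop-in
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-312465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2

[Install]
EOF
sudo systemctl daemon-reload && sudo systemctl start kubelet
```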
	I0919 23:13:36.143562  304826 ssh_runner.go:195] Run: sudo crictl info
	I0919 23:13:36.185797  304826 cni.go:84] Creating CNI manager for ""
	I0919 23:13:36.185828  304826 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:13:36.185843  304826 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0919 23:13:36.185875  304826 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-312465 NodeName:newest-cni-312465 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:13:36.186182  304826 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-312465"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:13:36.186269  304826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:13:36.198096  304826 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:13:36.198546  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:13:36.214736  304826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0919 23:13:36.244125  304826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:13:36.270995  304826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
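Note: the generated kubeadm config shown above is staged as kubeadm.yaml.new and later consumed by `kubeadm init` (see the invocation at 23:13:38.44, which adds --ignore-preflight-errors for the docker driver). A sketch of a non-destructive way to exercise such a config, assuming the same staged path and bundled binary:

```bash
# Dry-run kubeadm against the staged config without mutating the node
sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml.new \
  --dry-run
```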
	I0919 23:13:36.295177  304826 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:13:36.299365  304826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:13:36.313119  304826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:13:36.396378  304826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:13:36.418497  304826 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465 for IP: 192.168.94.2
	I0919 23:13:36.418522  304826 certs.go:194] generating shared ca certs ...
	I0919 23:13:36.418544  304826 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.418705  304826 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 23:13:36.418761  304826 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 23:13:36.418775  304826 certs.go:256] generating profile certs ...
	I0919 23:13:36.418843  304826 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.key
	I0919 23:13:36.418860  304826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.crt with IP's: []
	I0919 23:13:36.531217  304826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.crt ...
	I0919 23:13:36.531247  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.crt: {Name:mk2dead7c7dd4abba877b10a34bd54e0741b0c51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.531436  304826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.key ...
	I0919 23:13:36.531449  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.key: {Name:mkb2dce7d200188d9475ab5211c83bb5dd871bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.531531  304826 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb
	I0919 23:13:36.531547  304826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0919 23:13:36.764681  304826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb ...
	I0919 23:13:36.764719  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb: {Name:mkd78eb5b6eba4ac120b530170a9a115208fec96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.764949  304826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb ...
	I0919 23:13:36.764969  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb: {Name:mk23f979dad453c3780b4813b8fc576ea9e94f4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.765077  304826 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt
	I0919 23:13:36.765208  304826 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key
	I0919 23:13:36.765299  304826 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key
	I0919 23:13:36.765323  304826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt with IP's: []
	I0919 23:13:36.811680  286555 pod_ready.go:83] waiting for pod "kube-proxy-t4j4z" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.211272  286555 pod_ready.go:94] pod "kube-proxy-t4j4z" is "Ready"
	I0919 23:13:37.211303  286555 pod_ready.go:86] duration metric: took 399.591313ms for pod "kube-proxy-t4j4z" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.410092  286555 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.810858  286555 pod_ready.go:94] pod "kube-scheduler-no-preload-364197" is "Ready"
	I0919 23:13:37.810890  286555 pod_ready.go:86] duration metric: took 400.769138ms for pod "kube-scheduler-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.810907  286555 pod_ready.go:40] duration metric: took 1m15.409243632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:13:37.871652  286555 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:13:37.873712  286555 out.go:179] * Done! kubectl is now configured to use "no-preload-364197" cluster and "default" namespace by default
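Note: at this point the kubeconfig context has been switched to the freshly started cluster. A quick sketch for confirming that from the host (expected values taken from the log):

```bash
# kubectl should now target the no-preload profile's cluster
kubectl config current-context        # expect: no-preload-364197
kubectl get pods -n kube-system       # control-plane pods should be Running/Ready
```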
	I0919 23:13:33.909671  294587 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 23:13:33.914917  294587 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 23:13:33.914945  294587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 23:13:33.936898  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 23:13:34.176650  294587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:13:34.176752  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:34.176780  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-149888 minikube.k8s.io/updated_at=2025_09_19T23_13_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=default-k8s-diff-port-149888 minikube.k8s.io/primary=true
	I0919 23:13:34.185919  294587 ops.go:34] apiserver oom_adj: -16
	I0919 23:13:34.285582  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:34.786386  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:35.286435  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:35.786591  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:36.286349  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:36.786365  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:37.286088  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:37.786249  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:38.286182  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:38.381035  294587 kubeadm.go:1105] duration metric: took 4.204361703s to wait for elevateKubeSystemPrivileges
	I0919 23:13:38.381076  294587 kubeadm.go:394] duration metric: took 40.106256802s to StartCluster
	I0919 23:13:38.381101  294587 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:38.381208  294587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:13:38.383043  294587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:38.383384  294587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:13:38.383418  294587 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:13:38.383497  294587 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:13:38.383584  294587 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-149888"
	I0919 23:13:38.383599  294587 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-149888"
	I0919 23:13:38.383622  294587 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-149888"
	I0919 23:13:38.383623  294587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-149888"
	I0919 23:13:38.383638  294587 config.go:182] Loaded profile config "default-k8s-diff-port-149888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:38.383654  294587 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:13:38.384100  294587 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:13:38.384352  294587 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:13:38.386876  294587 out.go:179] * Verifying Kubernetes components...
	I0919 23:13:38.392366  294587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:13:38.414274  294587 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:13:37.730859  304826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt ...
	I0919 23:13:37.730889  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt: {Name:mka643fd8f3814e682ac62f488ac921be438271e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:37.731102  304826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key ...
	I0919 23:13:37.731122  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key: {Name:mk1e0a6b750f125c5af55b66a1efb72f4537d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:37.731375  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:13:37.731416  304826 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:13:37.731424  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:13:37.731453  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:13:37.731475  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:13:37.731496  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:13:37.731531  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:13:37.732086  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:13:37.760205  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:13:37.788964  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:13:37.821273  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:13:37.854511  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 23:13:37.886302  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:13:37.919585  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:13:37.949973  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:13:37.982330  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:13:38.018976  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:13:38.049608  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:13:38.081886  304826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:13:38.109125  304826 ssh_runner.go:195] Run: openssl version
	I0919 23:13:38.118278  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:13:38.133041  304826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:13:38.138504  304826 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:13:38.138570  304826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:13:38.147725  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 23:13:38.160519  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:13:38.174178  304826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:13:38.179241  304826 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:13:38.179303  304826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:13:38.188486  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:13:38.203742  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:13:38.216299  304826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:13:38.221016  304826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:13:38.221087  304826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:13:38.229132  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
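Note: the openssl/ln sequence above installs each CA under /etc/ssl/certs using OpenSSL's subject-hash naming, so TLS clients on the node can locate it. A sketch of that convention for the minikube CA (hash b5213941 in this run):

```bash
# OpenSSL looks up CAs by subject hash, so each PEM gets a <hash>.0 symlink
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
```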
	I0919 23:13:38.242362  304826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:13:38.247181  304826 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:13:38.247247  304826 kubeadm.go:392] StartCluster: {Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:13:38.247335  304826 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:13:38.247392  304826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:13:38.289664  304826 cri.go:89] found id: ""
	I0919 23:13:38.289745  304826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:13:38.300688  304826 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:13:38.314602  304826 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:13:38.314666  304826 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:13:38.328513  304826 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:13:38.328532  304826 kubeadm.go:157] found existing configuration files:
	
	I0919 23:13:38.328573  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:13:38.340801  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:13:38.340902  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:13:38.354142  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:13:38.367990  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:13:38.368067  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:13:38.379710  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:13:38.393587  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:13:38.393654  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:13:38.406457  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:13:38.423007  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:13:38.423071  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
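The grep/rm sequence above is the stale-kubeconfig cleanup step: each file under /etc/kubernetes is checked for the expected control-plane endpoint, and any file that does not contain it (or does not exist) is removed so that the following "kubeadm init" regenerates it. A minimal Go sketch of that pattern, assuming a hypothetical runSSH helper in place of minikube's ssh_runner (illustrative only, not minikube's actual code):

	package main

	import "fmt"

	// cleanupStaleConfigs mirrors the grep/rm sequence above: any kubeconfig that
	// does not mention the expected control-plane endpoint is removed so that the
	// subsequent "kubeadm init" regenerates it. runSSH is a hypothetical stand-in
	// for minikube's ssh_runner.
	func cleanupStaleConfigs(runSSH func(cmd string) error, endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the endpoint is missing or the file is absent.
			if err := runSSH(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
				_ = runSSH("sudo rm -f " + f)
			}
		}
	}

	func main() {
		cleanupStaleConfigs(func(cmd string) error {
			fmt.Println("would run:", cmd)
			return nil
		}, "https://control-plane.minikube.internal:8443")
	}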
	I0919 23:13:38.441889  304826 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:13:38.509349  304826 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:13:38.509425  304826 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:13:38.535354  304826 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:13:38.535436  304826 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:13:38.535487  304826 kubeadm.go:310] OS: Linux
	I0919 23:13:38.535547  304826 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:13:38.535585  304826 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:13:38.535633  304826 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:13:38.535689  304826 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:13:38.535753  304826 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:13:38.535813  304826 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:13:38.535850  304826 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:13:38.535885  304826 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:13:38.621848  304826 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:13:38.622065  304826 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:13:38.622186  304826 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:13:38.630978  304826 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:13:38.415345  294587 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:13:38.415366  294587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:13:38.415418  294587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:13:38.415735  294587 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-149888"
	I0919 23:13:38.415780  294587 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:13:38.416297  294587 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:13:38.445969  294587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:13:38.447208  294587 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:13:38.447231  294587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:13:38.447297  294587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:13:38.480457  294587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:13:38.540300  294587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:13:38.557619  294587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:13:38.594341  294587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:13:38.630764  294587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:13:38.799085  294587 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
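The "host record injected" message corresponds to the sed pipeline a few lines above: it fetches the coredns ConfigMap, splices a hosts block (and a log directive) into the Corefile, and replaces the ConfigMap. Under those assumptions, the resulting Corefile fragment would look roughly like this (other plugins unchanged):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.103.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}

The fallthrough directive lets every name other than host.minikube.internal continue to the normal kubernetes/forward plugins.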
	I0919 23:13:38.800978  294587 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-149888" to be "Ready" ...
	I0919 23:13:38.812605  294587 node_ready.go:49] node "default-k8s-diff-port-149888" is "Ready"
	I0919 23:13:38.812642  294587 node_ready.go:38] duration metric: took 11.622008ms for node "default-k8s-diff-port-149888" to be "Ready" ...
	I0919 23:13:38.812666  294587 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:13:38.812750  294587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:13:39.036443  294587 api_server.go:72] duration metric: took 652.97537ms to wait for apiserver process to appear ...
	I0919 23:13:39.036471  294587 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:13:39.036490  294587 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0919 23:13:39.043372  294587 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0919 23:13:39.047190  294587 api_server.go:141] control plane version: v1.34.0
	I0919 23:13:39.047226  294587 api_server.go:131] duration metric: took 10.747839ms to wait for apiserver health ...
	I0919 23:13:39.047237  294587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:13:39.049788  294587 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W0919 23:13:36.317685  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:38.318647  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	I0919 23:13:39.819987  295194 pod_ready.go:94] pod "coredns-66bc5c9577-t6v26" is "Ready"
	I0919 23:13:39.820015  295194 pod_ready.go:86] duration metric: took 37.509771492s for pod "coredns-66bc5c9577-t6v26" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.822985  295194 pod_ready.go:83] waiting for pod "etcd-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.827553  295194 pod_ready.go:94] pod "etcd-embed-certs-403962" is "Ready"
	I0919 23:13:39.827574  295194 pod_ready.go:86] duration metric: took 4.567201ms for pod "etcd-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.829949  295194 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.834015  295194 pod_ready.go:94] pod "kube-apiserver-embed-certs-403962" is "Ready"
	I0919 23:13:39.834041  295194 pod_ready.go:86] duration metric: took 4.068136ms for pod "kube-apiserver-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.836103  295194 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.014492  295194 pod_ready.go:94] pod "kube-controller-manager-embed-certs-403962" is "Ready"
	I0919 23:13:40.014519  295194 pod_ready.go:86] duration metric: took 178.389529ms for pod "kube-controller-manager-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.214694  295194 pod_ready.go:83] waiting for pod "kube-proxy-5tf2s" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.614193  295194 pod_ready.go:94] pod "kube-proxy-5tf2s" is "Ready"
	I0919 23:13:40.614222  295194 pod_ready.go:86] duration metric: took 399.49287ms for pod "kube-proxy-5tf2s" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.814999  295194 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:41.214398  295194 pod_ready.go:94] pod "kube-scheduler-embed-certs-403962" is "Ready"
	I0919 23:13:41.214429  295194 pod_ready.go:86] duration metric: took 399.403485ms for pod "kube-scheduler-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:41.214439  295194 pod_ready.go:40] duration metric: took 38.913620351s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:13:41.267599  295194 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:13:41.270700  295194 out.go:179] * Done! kubectl is now configured to use "embed-certs-403962" cluster and "default" namespace by default
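The pod_ready waits above poll each kube-system pod until its Ready condition is true (or the pod is gone). A minimal client-go sketch of that readiness check, assuming the kubeconfig path and pod name from the log (illustrative only, not minikube's pod_ready implementation):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the named pod has condition Ready=True.
	func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Path as seen on the node in the log above; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ready, err := isPodReady(context.Background(), cs, "kube-system", "coredns-66bc5c9577-t6v26")
		fmt.Println(ready, err)
	}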
	I0919 23:13:38.634403  304826 out.go:252]   - Generating certificates and keys ...
	I0919 23:13:38.634645  304826 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:13:38.634729  304826 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:13:38.733514  304826 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:13:39.062476  304826 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:13:39.133445  304826 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:13:39.439953  304826 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:13:39.872072  304826 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:13:39.872221  304826 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-312465] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0919 23:13:39.972922  304826 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:13:39.973129  304826 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-312465] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0919 23:13:40.957549  304826 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:13:41.144394  304826 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:13:41.426739  304826 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:13:41.426849  304826 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:13:41.554555  304826 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:13:41.608199  304826 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:13:41.645796  304826 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:13:41.778911  304826 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:13:41.900942  304826 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:13:41.901396  304826 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:13:41.905522  304826 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:13:41.907209  304826 out.go:252]   - Booting up control plane ...
	I0919 23:13:41.907335  304826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:13:41.907460  304826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:13:41.907982  304826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:13:41.919781  304826 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:13:41.919920  304826 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:13:41.926298  304826 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:13:41.926476  304826 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:13:41.926547  304826 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:13:42.017500  304826 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:13:42.017660  304826 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:13:39.052217  294587 addons.go:514] duration metric: took 668.711417ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 23:13:39.053005  294587 system_pods.go:59] 9 kube-system pods found
	I0919 23:13:39.053044  294587 system_pods.go:61] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.053057  294587 system_pods.go:61] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.053070  294587 system_pods.go:61] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:13:39.053085  294587 system_pods.go:61] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 23:13:39.053092  294587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.053105  294587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.053113  294587 system_pods.go:61] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.053135  294587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.053144  294587 system_pods.go:61] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.053189  294587 system_pods.go:74] duration metric: took 5.910482ms to wait for pod list to return data ...
	I0919 23:13:39.053205  294587 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:13:39.055828  294587 default_sa.go:45] found service account: "default"
	I0919 23:13:39.055846  294587 default_sa.go:55] duration metric: took 2.635401ms for default service account to be created ...
	I0919 23:13:39.055855  294587 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:13:39.058754  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:39.058787  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.058797  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.058807  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:13:39.058821  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 23:13:39.058830  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.058841  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.058846  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.058852  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.058857  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.058878  294587 retry.go:31] will retry after 270.945985ms: missing components: kube-dns, kube-proxy
	I0919 23:13:39.304737  294587 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-149888" context rescaled to 1 replicas
	I0919 23:13:39.337213  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:39.337253  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.337265  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.337271  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:39.337278  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:39.337284  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.337290  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.337298  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.337305  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.337314  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.337335  294587 retry.go:31] will retry after 357.220825ms: missing components: kube-dns
	I0919 23:13:39.698915  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:39.698949  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.698958  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.698966  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:39.698975  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:39.698980  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.698987  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.698995  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.699002  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.699013  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.699035  294587 retry.go:31] will retry after 375.514546ms: missing components: kube-dns
	I0919 23:13:40.079067  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:40.079105  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:40.079117  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:40.079125  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:40.079131  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:40.079136  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:40.079141  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:40.079148  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:40.079191  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:40.079199  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:40.079216  294587 retry.go:31] will retry after 558.632768ms: missing components: kube-dns
	I0919 23:13:40.643894  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:40.643930  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:40.643938  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:40.643947  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:40.643953  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:40.643960  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:40.643970  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:40.643983  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:40.643989  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:40.644010  294587 retry.go:31] will retry after 761.400913ms: missing components: kube-dns
	I0919 23:13:41.410199  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:41.410236  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:41.410250  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:41.410257  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:41.410263  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:41.410269  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:41.410277  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:41.410285  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:41.410291  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:41.410312  294587 retry.go:31] will retry after 629.477098ms: missing components: kube-dns
	I0919 23:13:42.043664  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:42.043705  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:42.043715  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:42.043724  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:42.043729  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:42.043739  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:42.043747  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:42.043753  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:42.043762  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:42.043778  294587 retry.go:31] will retry after 1.069085397s: missing components: kube-dns
	I0919 23:13:43.117253  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:43.117290  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:43.117297  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:43.117305  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:43.117308  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:43.117312  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:43.117318  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:43.117322  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:43.117326  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:43.117339  294587 retry.go:31] will retry after 1.031094562s: missing components: kube-dns
	I0919 23:13:44.153419  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:44.153454  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:44.153460  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:44.153467  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:44.153472  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:44.153475  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:44.153480  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:44.153484  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:44.153487  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:44.153499  294587 retry.go:31] will retry after 1.715155668s: missing components: kube-dns
	I0919 23:13:45.873736  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:45.873776  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:45.873786  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:45.873794  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:45.873800  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:45.873805  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:45.873820  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:45.873826  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:45.873832  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:45.873863  294587 retry.go:31] will retry after 2.128059142s: missing components: kube-dns
	I0919 23:13:48.006564  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:48.006602  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:48.006610  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:48.006618  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:48.006624  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:48.006630  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:48.006635  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:48.006640  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:48.006647  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:48.006662  294587 retry.go:31] will retry after 1.782367114s: missing components: kube-dns
	I0919 23:13:50.518700  304826 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 8.501106835s
	I0919 23:13:50.522818  304826 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:13:50.522974  304826 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I0919 23:13:50.523114  304826 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:13:50.523256  304826 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
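The kubelet-check and control-plane-check phases above poll HTTP health endpoints (/healthz, /livez) until they return 200 or the 4m0s budget is exhausted. A minimal sketch of such a polling loop under those assumptions (hypothetical helper, not kubeadm's own code); TLS verification is skipped because the component serving certificates are self-signed:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it returns HTTP 200 or the deadline passes.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		// Endpoint taken from the control-plane-check lines above.
		fmt.Println(waitHealthy("https://127.0.0.1:10257/healthz", 4*time.Minute))
	}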
	I0919 23:13:49.793148  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:49.793210  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:49.793217  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:49.793223  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:49.793229  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:49.793232  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:49.793243  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:49.793246  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:49.793251  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:49.793265  294587 retry.go:31] will retry after 2.338572613s: missing components: kube-dns
	I0919 23:13:52.140344  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:52.140388  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:52.140397  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:52.140407  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:52.140413  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:52.140419  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:52.140428  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:52.140435  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:52.140442  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:52.140471  294587 retry.go:31] will retry after 3.086457646s: missing components: kube-dns
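The repeating "will retry after …: missing components: kube-dns" lines come from a retry loop that re-lists the kube-system pods with a growing interval until every expected component is present and running, or the overall timeout expires. A minimal sketch of that backoff loop, with a hypothetical listMissing callback standing in for the pod listing (not minikube's retry package):

	package main

	import (
		"fmt"
		"time"
	)

	// waitForComponents retries listMissing with a growing interval until it
	// returns an empty slice or the overall timeout is exceeded.
	func waitForComponents(listMissing func() []string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		interval := 250 * time.Millisecond
		for {
			missing := listMissing()
			if len(missing) == 0 {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out, still missing: %v", missing)
			}
			fmt.Printf("will retry after %s: missing components: %v\n", interval, missing)
			time.Sleep(interval)
			interval *= 2 // grow the interval; the real schedule also adds jitter
		}
	}

	func main() {
		calls := 0
		err := waitForComponents(func() []string {
			calls++
			if calls < 3 {
				return []string{"kube-dns"}
			}
			return nil
		}, 6*time.Minute)
		fmt.Println(err)
	}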
	I0919 23:13:52.884946  304826 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.362051829s
	I0919 23:13:53.462893  304826 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.939923299s
	I0919 23:13:55.526762  304826 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.001364253s
	I0919 23:13:55.539011  304826 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:13:55.554378  304826 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:13:55.568644  304826 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:13:55.568919  304826 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-312465 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:13:55.589739  304826 kubeadm.go:310] [bootstrap-token] Using token: jlnn4o.ezmdj0dkuh5aygdp
	I0919 23:13:55.597493  304826 out.go:252]   - Configuring RBAC rules ...
	I0919 23:13:55.597663  304826 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:13:55.605517  304826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:13:55.615421  304826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:13:55.619862  304826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:13:55.623882  304826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:13:55.627801  304826 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:13:55.932128  304826 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:13:56.356624  304826 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:13:56.933510  304826 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:13:56.934045  304826 kubeadm.go:310] 
	I0919 23:13:56.934263  304826 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:13:56.934299  304826 kubeadm.go:310] 
	I0919 23:13:56.934450  304826 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:13:56.934513  304826 kubeadm.go:310] 
	I0919 23:13:56.934545  304826 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:13:56.934630  304826 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:13:56.934686  304826 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:13:56.934691  304826 kubeadm.go:310] 
	I0919 23:13:56.934758  304826 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:13:56.934770  304826 kubeadm.go:310] 
	I0919 23:13:56.934825  304826 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:13:56.934831  304826 kubeadm.go:310] 
	I0919 23:13:56.934891  304826 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:13:56.934986  304826 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:13:56.935060  304826 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:13:56.935065  304826 kubeadm.go:310] 
	I0919 23:13:56.935176  304826 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:13:56.935268  304826 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:13:56.935275  304826 kubeadm.go:310] 
	I0919 23:13:56.935375  304826 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jlnn4o.ezmdj0dkuh5aygdp \
	I0919 23:13:56.935496  304826 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 \
	I0919 23:13:56.935523  304826 kubeadm.go:310] 	--control-plane 
	I0919 23:13:56.935529  304826 kubeadm.go:310] 
	I0919 23:13:56.941214  304826 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:13:56.941255  304826 kubeadm.go:310] 
	I0919 23:13:56.941369  304826 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jlnn4o.ezmdj0dkuh5aygdp \
	I0919 23:13:56.941535  304826 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 
	I0919 23:13:56.944009  304826 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:13:56.944144  304826 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:13:56.944225  304826 cni.go:84] Creating CNI manager for ""
	I0919 23:13:56.944236  304826 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:13:56.946333  304826 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	4563e410dc0f2       6e38f40d628db       11 seconds ago       Running             storage-provisioner         3                   a66acbb7b7fb7       storage-provisioner
	a290868582b05       523cad1a4df73       28 seconds ago       Exited              dashboard-metrics-scraper   2                   5cc7b3904cba5       dashboard-metrics-scraper-6ffb444bf9-cnlgb
	47aa58544505a       07655ddf2eebe       43 seconds ago       Running             kubernetes-dashboard        0                   e2968b557b4b7       kubernetes-dashboard-855c9754f9-9hzq9
	d85d77e0cb950       409467f978b4a       55 seconds ago       Running             kindnet-cni                 1                   dce54f2503eb1       kindnet-cfvvr
	34e7809edb448       56cc512116c8f       55 seconds ago       Running             busybox                     1                   454b71d72e776       busybox
	cc833990e602c       52546a367cc9e       55 seconds ago       Running             coredns                     1                   1a6db11ddddab       coredns-66bc5c9577-t6v26
	ea70f76b17b6e       6e38f40d628db       55 seconds ago       Exited              storage-provisioner         2                   a66acbb7b7fb7       storage-provisioner
	3a603aaa7a1bc       df0860106674d       55 seconds ago       Running             kube-proxy                  3                   4c0f4703a4e72       kube-proxy-5tf2s
	685fd68b08faf       a0af72f2ec6d6       About a minute ago   Running             kube-controller-manager     1                   e0f9c4a55d1c8       kube-controller-manager-embed-certs-403962
	483f16593a289       46169d968e920       About a minute ago   Running             kube-scheduler              1                   e656d3903d6c0       kube-scheduler-embed-certs-403962
	fff89acc2e74b       90550c43ad2bc       About a minute ago   Running             kube-apiserver              1                   adb7d9006058d       kube-apiserver-embed-certs-403962
	e01a7e7e7cd1e       5f1f5298c888d       About a minute ago   Running             etcd                        1                   f46f404ac9d28       etcd-embed-certs-403962
	56145aab088b8       56cc512116c8f       About a minute ago   Exited              busybox                     0                   c877c65b7d0e6       busybox
	5a6738588eda9       52546a367cc9e       About a minute ago   Exited              coredns                     0                   5dfea961e621a       coredns-66bc5c9577-t6v26
	c5049fc2e8ac9       df0860106674d       2 minutes ago        Exited              kube-proxy                  2                   53519fbdb5fc0       kube-proxy-5tf2s
	6044b48856573       409467f978b4a       2 minutes ago        Exited              kindnet-cni                 0                   8ce059f3c7b8d       kindnet-cfvvr
	432944df07afe       a0af72f2ec6d6       2 minutes ago        Exited              kube-controller-manager     0                   a58b63567c0d4       kube-controller-manager-embed-certs-403962
	3a9a8f6fc34ea       46169d968e920       2 minutes ago        Exited              kube-scheduler              0                   5a88c51511690       kube-scheduler-embed-certs-403962
	bfd145fe58ffd       90550c43ad2bc       2 minutes ago        Exited              kube-apiserver              0                   cef1a795a0d60       kube-apiserver-embed-certs-403962
	cf7db7dc6b4de       5f1f5298c888d       2 minutes ago        Exited              etcd                        0                   4126d76c28cb6       etcd-embed-certs-403962
	
	
	==> containerd <==
	Sep 19 23:13:31 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:31.905077766Z" level=info msg="RemoveContainer for \"47d588788caa22f91d6357a93433052f87a4cdcdb75180abfd6765e87ca7aec1\""
	Sep 19 23:13:32 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:32.056461308Z" level=info msg="RemoveContainer for \"47d588788caa22f91d6357a93433052f87a4cdcdb75180abfd6765e87ca7aec1\" returns successfully"
	Sep 19 23:13:32 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:32.367922611Z" level=info msg="received exit event container_id:\"ea70f76b17b6e8157ddbc228b4f91d0b0061b96d5691b7517b5b27a70e7700f0\"  id:\"ea70f76b17b6e8157ddbc228b4f91d0b0061b96d5691b7517b5b27a70e7700f0\"  pid:1728  exit_status:1  exited_at:{seconds:1758323612  nanos:367541224}"
	Sep 19 23:13:32 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:32.401664861Z" level=info msg="shim disconnected" id=ea70f76b17b6e8157ddbc228b4f91d0b0061b96d5691b7517b5b27a70e7700f0 namespace=k8s.io
	Sep 19 23:13:32 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:32.401710478Z" level=warning msg="cleaning up after shim disconnected" id=ea70f76b17b6e8157ddbc228b4f91d0b0061b96d5691b7517b5b27a70e7700f0 namespace=k8s.io
	Sep 19 23:13:32 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:32.401722276Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 19 23:13:32 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:32.910172448Z" level=info msg="RemoveContainer for \"899284dcc3e2b38e51ef69b1f80f7dabb5d3fc618dbb1e3e873a7290043469f8\""
	Sep 19 23:13:32 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:32.915306549Z" level=info msg="RemoveContainer for \"899284dcc3e2b38e51ef69b1f80f7dabb5d3fc618dbb1e3e873a7290043469f8\" returns successfully"
	Sep 19 23:13:40 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:40.661397517Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 23:13:40 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:40.698136934Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 19 23:13:40 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:40.699821257Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 19 23:13:40 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:40.699858629Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 19 23:13:46 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:46.662730046Z" level=info msg="CreateContainer within sandbox \"a66acbb7b7fb7d81cf63f6dfe0585062ae52aa8874150698912e1fe48bff0282\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:3,}"
	Sep 19 23:13:46 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:46.679231848Z" level=info msg="CreateContainer within sandbox \"a66acbb7b7fb7d81cf63f6dfe0585062ae52aa8874150698912e1fe48bff0282\" for &ContainerMetadata{Name:storage-provisioner,Attempt:3,} returns container id \"4563e410dc0f252652afbc6f753a6803481cab963203e75bf937ef3a03215571\""
	Sep 19 23:13:46 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:46.680132476Z" level=info msg="StartContainer for \"4563e410dc0f252652afbc6f753a6803481cab963203e75bf937ef3a03215571\""
	Sep 19 23:13:46 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:46.736304005Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 23:13:46 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:46.742271462Z" level=info msg="StartContainer for \"4563e410dc0f252652afbc6f753a6803481cab963203e75bf937ef3a03215571\" returns successfully"
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.846110567Z" level=info msg="StopPodSandbox for \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\""
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.846728703Z" level=info msg="TearDown network for sandbox \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\" successfully"
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.846982825Z" level=info msg="StopPodSandbox for \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\" returns successfully"
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.848635343Z" level=info msg="RemovePodSandbox for \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\""
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.848687445Z" level=info msg="Forcibly stopping sandbox \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\""
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.848794316Z" level=info msg="TearDown network for sandbox \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\" successfully"
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.853518713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.853606013Z" level=info msg="RemovePodSandbox \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\" returns successfully"
	
	
	==> coredns [5a6738588eda9670758d2c95ddd575f0d3bbe663fccc84269735b439c58d2240] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55865 - 15726 "HINFO IN 8224844395446692356.8669349578143619784. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019682314s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cc833990e602ca5b705b8aa5ac46b56807fa0fadf23b708a7d23265bfeb92d8f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45444 - 21218 "HINFO IN 2590847814099361813.5910761736158485681. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020611972s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
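
Both coredns instances above end up unable to reach the kubernetes Service ClusterIP ("dial tcp 10.96.0.1:443: i/o timeout"). A minimal, hypothetical connectivity probe along these lines — the address is copied from the log entries, and it would have to be run from a pod or network namespace on the affected node — can help separate a CNI/network problem from an apiserver outage. It is a sketch, not part of the captured test output:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the in-cluster apiserver address seen in the coredns errors above.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err) // would reproduce the i/o timeout if the path is still broken
		return
	}
	defer conn.Close()
	fmt.Println("TCP connect to 10.96.0.1:443 succeeded")
}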
	
	
	==> describe nodes <==
	Name:               embed-certs-403962
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-403962
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=embed-certs-403962
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_11_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:11:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-403962
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:13:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:13:31 +0000   Fri, 19 Sep 2025 23:11:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:13:31 +0000   Fri, 19 Sep 2025 23:11:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:13:31 +0000   Fri, 19 Sep 2025 23:11:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:13:31 +0000   Fri, 19 Sep 2025 23:11:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-403962
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 0835b9f66bce444bab3315337fb85fb5
	  System UUID:                01ab2205-6958-4b6a-b331-e4029a4f9b37
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-t6v26                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m40s
	  kube-system                 etcd-embed-certs-403962                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m46s
	  kube-system                 kindnet-cfvvr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m41s
	  kube-system                 kube-apiserver-embed-certs-403962             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 kube-controller-manager-embed-certs-403962    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 kube-proxy-5tf2s                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-scheduler-embed-certs-403962             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 metrics-server-746fcd58dc-g24nt               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         81s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-cnlgb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9hzq9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m21s                  kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 2m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m52s (x8 over 2m52s)  kubelet          Node embed-certs-403962 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x8 over 2m52s)  kubelet          Node embed-certs-403962 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x7 over 2m52s)  kubelet          Node embed-certs-403962 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m46s                  kubelet          Node embed-certs-403962 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m46s                  kubelet          Node embed-certs-403962 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m46s                  kubelet          Node embed-certs-403962 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m46s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m42s                  node-controller  Node embed-certs-403962 event: Registered Node embed-certs-403962 in Controller
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node embed-certs-403962 status is now: NodeHasSufficientMemory
	  Normal  Starting                 62s                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node embed-certs-403962 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x7 over 62s)      kubelet          Node embed-certs-403962 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           54s                    node-controller  Node embed-certs-403962 event: Registered Node embed-certs-403962 in Controller
	  Normal  Starting                 3s                     kubelet          Starting kubelet.
	  Normal  Starting                 2s                     kubelet          Starting kubelet.
	  Normal  Starting                 1s                     kubelet          Starting kubelet.
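
The node conditions shown in the describe output above (MemoryPressure, DiskPressure, PIDPressure, Ready) can also be read programmatically. A minimal client-go sketch, assuming a kubeconfig at the default path with access to this cluster and using the node name from the output above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config points at the embed-certs-403962 cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-403962", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		// Prints the same Type/Status/Reason triples that the describe output renders above.
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}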
	
	
	==> dmesg <==
	[  +0.995983] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.504855] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.994952] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.505945] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501588] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.993278] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.507907] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500995] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.992251] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.509112] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501500] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989799] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.510643] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501653] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989383] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.511830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500482] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989056] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.512088] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500241] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501649] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501291] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501422] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[Sep19 23:13] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	
	
	==> etcd [cf7db7dc6b4def457de8f1757d1f052269f241085897a0915ab05475e9007382] <==
	{"level":"warn","ts":"2025-09-19T23:11:09.188665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.198456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.205371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.213033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.219708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.226288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.232685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.240509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.247778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.255026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.262017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.269447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.276733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.284466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.292397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.299271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.306899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.314207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.322104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.329501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.342457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.346477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.353073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.360047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.416438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33376","server-name":"","error":"EOF"}
	
	
	==> etcd [e01a7e7e7cd1ef798fd87f8f0fdeba66a7500cc192b790f0549f24a37ef33988] <==
	{"level":"warn","ts":"2025-09-19T23:13:00.023924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.031706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.040492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.051616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.065636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.074997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.081805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.088472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.095349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.109843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.118599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.126611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.135370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.142426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.157573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.164474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.172034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59276","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T23:13:12.097690Z","caller":"traceutil/trace.go:172","msg":"trace[232139012] transaction","detail":"{read_only:false; response_revision:731; number_of_response:1; }","duration":"258.463892ms","start":"2025-09-19T23:13:11.839200Z","end":"2025-09-19T23:13:12.097664Z","steps":["trace[232139012] 'process raft request'  (duration: 226.719442ms)","trace[232139012] 'compare'  (duration: 31.635495ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:13:14.435808Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.147551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-t6v26\" limit:1 ","response":"range_response_count:1 size:5793"}
	{"level":"info","ts":"2025-09-19T23:13:14.435925Z","caller":"traceutil/trace.go:172","msg":"trace[295058405] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-t6v26; range_end:; response_count:1; response_revision:734; }","duration":"123.287502ms","start":"2025-09-19T23:13:14.312618Z","end":"2025-09-19T23:13:14.435906Z","steps":["trace[295058405] 'range keys from in-memory index tree'  (duration: 122.994923ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:13:30.060102Z","caller":"traceutil/trace.go:172","msg":"trace[1620432508] transaction","detail":"{read_only:false; response_revision:765; number_of_response:1; }","duration":"115.142546ms","start":"2025-09-19T23:13:29.944937Z","end":"2025-09-19T23:13:30.060080Z","steps":["trace[1620432508] 'process raft request'  (duration: 115.007955ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:13:30.170551Z","caller":"traceutil/trace.go:172","msg":"trace[2094453612] transaction","detail":"{read_only:false; response_revision:768; number_of_response:1; }","duration":"103.238707ms","start":"2025-09-19T23:13:30.067295Z","end":"2025-09-19T23:13:30.170534Z","steps":["trace[2094453612] 'process raft request'  (duration: 103.175902ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:13:30.170554Z","caller":"traceutil/trace.go:172","msg":"trace[2055278428] transaction","detail":"{read_only:false; response_revision:767; number_of_response:1; }","duration":"104.13866ms","start":"2025-09-19T23:13:30.066398Z","end":"2025-09-19T23:13:30.170536Z","steps":["trace[2055278428] 'process raft request'  (duration: 79.101521ms)","trace[2055278428] 'compare'  (duration: 24.840346ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:13:31.720121Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.909334ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638355411949758898 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-muifrzavafhuho37txlpqynjom\" mod_revision:756 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-muifrzavafhuho37txlpqynjom\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-muifrzavafhuho37txlpqynjom\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:13:31.720238Z","caller":"traceutil/trace.go:172","msg":"trace[395202177] transaction","detail":"{read_only:false; response_revision:771; number_of_response:1; }","duration":"185.400323ms","start":"2025-09-19T23:13:31.534819Z","end":"2025-09-19T23:13:31.720219Z","steps":["trace[395202177] 'process raft request'  (duration: 82.644458ms)","trace[395202177] 'compare'  (duration: 101.791309ms)"],"step_count":2}
	
	
	==> kernel <==
	 23:13:58 up  1:56,  0 users,  load average: 4.93, 3.90, 2.42
	Linux embed-certs-403962 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6044b48856573e666bf5ceb3935f92c8f868de2441a0ec3b09843b9492ce7bbf] <==
	I0919 23:11:19.174496       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0919 23:11:19.174681       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:11:19.174701       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:11:19.174728       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:11:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:11:19.369127       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:11:19.369149       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:11:19.369198       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:11:19.369379       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0919 23:11:49.369352       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0919 23:11:49.370693       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0919 23:11:49.370719       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0919 23:11:49.375335       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0919 23:11:50.769365       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:11:50.769493       1 metrics.go:72] Registering metrics
	I0919 23:11:50.769771       1 controller.go:711] "Syncing nftables rules"
	I0919 23:11:59.368671       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:11:59.368745       1 main.go:301] handling current node
	I0919 23:12:09.378230       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:12:09.378265       1 main.go:301] handling current node
	I0919 23:12:19.374248       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:12:19.374302       1 main.go:301] handling current node
	I0919 23:12:29.369279       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:12:29.369316       1 main.go:301] handling current node
	
	
	==> kindnet [d85d77e0cb95093df3a320c88fc83229cb7eea4b4c40ff52eafa9f2ab25a30d9] <==
	I0919 23:13:02.907547       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0919 23:13:02.908343       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0919 23:13:02.908553       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:13:02.908581       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:13:02.908612       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:13:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:13:03.204600       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:13:03.204621       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:13:03.204632       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:13:03.401837       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0919 23:13:03.804720       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:13:03.804753       1 metrics.go:72] Registering metrics
	I0919 23:13:03.804816       1 controller.go:711] "Syncing nftables rules"
	I0919 23:13:13.204462       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:13:13.204536       1 main.go:301] handling current node
	I0919 23:13:23.210251       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:13:23.210291       1 main.go:301] handling current node
	I0919 23:13:33.204317       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:13:33.204353       1 main.go:301] handling current node
	I0919 23:13:43.205277       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:13:43.205319       1 main.go:301] handling current node
	I0919 23:13:53.209554       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:13:53.209597       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bfd145fe58ffdf1763f41676fcd29f8a3ce82593ddd0832d7885c98305dfe78c] <==
	I0919 23:11:17.201114       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:11:17.602735       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:11:17.608103       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:11:17.952410       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 23:12:20.145022       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:12:31.927984       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 23:12:36.265781       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:48838: use of closed network connection
	I0919 23:12:37.057010       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 23:12:37.063803       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:12:37.063884       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 23:12:37.063959       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0919 23:12:37.154609       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.109.206.124"}
	W0919 23:12:37.165285       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:12:37.165340       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0919 23:12:37.171698       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:12:37.171767       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [fff89acc2e74b902f9e0c95e662765f6072a12aba84c1a2088fb3a0255f2b922] <==
	I0919 23:13:01.787141       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 23:13:01.898573       1 handler_proxy.go:99] no RequestInfo found in the context
	W0919 23:13:01.898626       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:13:01.898708       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:13:01.898731       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0919 23:13:01.898622       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 23:13:01.900097       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:13:04.617389       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:13:04.669450       1 controller.go:667] quota admission added evaluator for: endpoints
	I0919 23:13:04.866591       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	{"level":"warn","ts":"2025-09-19T23:13:58.163404Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002cb2d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0919 23:13:58.163527       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: client disconnected" logger="UnhandledError"
	E0919 23:13:58.163562       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"client disconnected\"}: client disconnected" logger="UnhandledError"
	E0919 23:13:58.163535       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E0919 23:13:58.163742       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="GET" URI="/api/v1/nodes/embed-certs-403962" auditID="a3a0ca7d-b48f-489b-9edd-6cafc265b21b"
	E0919 23:13:58.165302       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:13:58.165383       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 23:13:58.165317       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:13:58.166616       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:13:58.166722       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.929934ms" method="GET" path="/api/v1/nodes/embed-certs-403962" result=null
	E0919 23:13:58.166783       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.91751ms" method="GET" path="/apis/storage.k8s.io/v1/csinodes/embed-certs-403962" result=null
	
	
	==> kube-controller-manager [432944df07afe4fa031e21fb600de4c622c12827f2cba9267a85f5f1b177d65c] <==
	I0919 23:11:16.846520       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 23:11:16.846534       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 23:11:16.846667       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 23:11:16.847613       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 23:11:16.847629       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 23:11:16.847708       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 23:11:16.847709       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 23:11:16.847732       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0919 23:11:16.847788       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 23:11:16.847854       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 23:11:16.847945       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 23:11:16.848194       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 23:11:16.848583       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0919 23:11:16.849130       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 23:11:16.849202       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 23:11:16.849419       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 23:11:16.849517       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 23:11:16.849590       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-403962"
	I0919 23:11:16.849640       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 23:11:16.851859       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 23:11:16.852089       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0919 23:11:16.852259       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:11:16.853380       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 23:11:16.856890       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 23:11:16.868196       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [685fd68b08faf09a7264d900382bc219699f762de534a16181d8d0716c2a76da] <==
	I0919 23:13:04.263435       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 23:13:04.263464       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:13:04.263505       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 23:13:04.263519       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 23:13:04.263642       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0919 23:13:04.263823       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0919 23:13:04.263900       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 23:13:04.266086       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 23:13:04.269342       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 23:13:04.269401       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:13:04.269438       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 23:13:04.270699       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 23:13:04.270741       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 23:13:04.270802       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 23:13:04.273161       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0919 23:13:04.279533       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 23:13:04.279700       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 23:13:04.279842       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-403962"
	I0919 23:13:04.279915       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 23:13:04.282951       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 23:13:04.285224       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 23:13:04.286309       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 23:13:04.305110       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0919 23:13:34.277940       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:13:34.315857       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
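
The controller-manager above keeps hitting stale discovery for metrics.k8s.io/v1beta1, which lines up with the metrics-server 503s in the kube-apiserver log. A minimal sketch, again assuming kubeconfig access to this cluster, that simply asks the apiserver whether the group is advertised at all:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		// A broken aggregated API (like v1beta1.metrics.k8s.io here) can make discovery partially fail.
		fmt.Println("discovery error:", err)
	}
	if groups == nil {
		return
	}
	for _, g := range groups.Groups {
		if g.Name == "metrics.k8s.io" {
			fmt.Println("metrics.k8s.io is advertised; versions:", len(g.Versions))
			return
		}
	}
	fmt.Println("metrics.k8s.io is not advertised by the apiserver")
}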
	
	
	==> kube-proxy [3a603aaa7a1bcf96dec283c58dc48ba29ca8f42b1a92797b9ceba2493c3ab89c] <==
	I0919 23:13:02.364625       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:13:02.446950       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:13:02.547138       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:13:02.547195       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0919 23:13:02.547274       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:13:02.574325       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:13:02.574410       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:13:02.583670       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:13:02.584227       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:13:02.584255       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:13:02.585580       1 config.go:200] "Starting service config controller"
	I0919 23:13:02.585616       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:13:02.585651       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:13:02.586246       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:13:02.586604       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:13:02.586631       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:13:02.587219       1 config.go:309] "Starting node config controller"
	I0919 23:13:02.587236       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:13:02.587244       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:13:02.687240       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 23:13:02.687263       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 23:13:02.688408       1 shared_informer.go:356] "Caches are synced" controller="service config"
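
Both kube-proxy instances log that they set route_localnet=1, which permits routing traffic addressed to loopback on non-loopback interfaces; the repeated "martian destination 127.0.0.11" lines in the dmesg section above appear to be the kernel logging exactly that kind of loopback-destined DNS traffic (127.0.0.11 is Docker's embedded DNS resolver), usually benign noise in a Docker-based cluster like this one. A small sketch, assuming it runs directly on the node, that reads the two relevant sysctls:

package main

import (
	"fmt"
	"os"
	"strings"
)

// readSysctl reads a sysctl value via /proc/sys, e.g. net.ipv4.conf.all.route_localnet.
func readSysctl(name string) string {
	path := "/proc/sys/" + strings.ReplaceAll(name, ".", "/")
	b, err := os.ReadFile(path)
	if err != nil {
		return "unreadable: " + err.Error()
	}
	return strings.TrimSpace(string(b))
}

func main() {
	for _, s := range []string{
		"net.ipv4.conf.all.route_localnet",
		"net.ipv4.conf.all.log_martians",
	} {
		fmt.Printf("%s = %s\n", s, readSysctl(s))
	}
}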
	
	
	==> kube-proxy [c5049fc2e8ac91137c74843b7caa1255b1066b4f520bc630221be98343ed16fe] <==
	I0919 23:11:36.657767       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:11:36.727117       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:11:36.827661       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:11:36.827709       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0919 23:11:36.827818       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:11:36.852020       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:11:36.852071       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:11:36.858348       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:11:36.858858       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:11:36.858899       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:11:36.860507       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:11:36.860511       1 config.go:200] "Starting service config controller"
	I0919 23:11:36.860549       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:11:36.860561       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:11:36.860561       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:11:36.860592       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:11:36.860675       1 config.go:309] "Starting node config controller"
	I0919 23:11:36.860686       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:11:36.860692       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:11:36.960789       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:11:36.960823       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 23:11:36.960852       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3a9a8f6fc34ea35cec42c4a31ceded3d6a9e79dee3522f4ae207c04308111533] <==
	E0919 23:11:09.877917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 23:11:09.878218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 23:11:09.878431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 23:11:09.881043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 23:11:09.881208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 23:11:09.881420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 23:11:09.881516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 23:11:09.881930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 23:11:10.708036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 23:11:10.760670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 23:11:10.855505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 23:11:10.868279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 23:11:10.986598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 23:11:11.094498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 23:11:11.139674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 23:11:11.144274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 23:11:11.229400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 23:11:11.250284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 23:11:11.307607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 23:11:11.309384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 23:11:11.330528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 23:11:11.339083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 23:11:11.346994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 23:11:11.367514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I0919 23:11:12.672956       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [483f16593a289462268696d97f733dfd8ff651f4c461db8d3a0613e0aaa05534] <==
	I0919 23:12:58.147568       1 serving.go:386] Generated self-signed cert in-memory
	W0919 23:13:00.828331       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 23:13:00.828369       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 23:13:00.828381       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 23:13:00.828390       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 23:13:00.872292       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 23:13:00.872322       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:13:00.877323       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:13:00.877385       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:13:00.879018       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 23:13:00.879089       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 23:13:00.977614       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: I0919 23:13:58.142364    3130 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: E0919 23:13:58.142429    3130 plugins.go:580] "Error initializing dynamic plugin prober" err="error initializing watcher: too many open files"
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: I0919 23:13:58.143198    3130 server.go:1262] "Started kubelet"
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: I0919 23:13:58.143428    3130 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: I0919 23:13:58.143827    3130 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: I0919 23:13:58.143912    3130 server_v1.go:49] "podresources" method="list" useActivePods=true
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: I0919 23:13:58.144197    3130 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: I0919 23:13:58.144735    3130 server.go:310] "Adding debug handlers to kubelet server"
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: I0919 23:13:58.147639    3130 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: I0919 23:13:58.148016    3130 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: E0919 23:13:58.148212    3130 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: I0919 23:13:58.148910    3130 volume_manager.go:313] "Starting Kubelet Volume Manager"
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: I0919 23:13:58.149859    3130 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: I0919 23:13:58.152188    3130 reconciler.go:29] "Reconciler: start to sync state"
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: E0919 23:13:58.152319    3130 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"embed-certs-403962\" not found"
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: I0919 23:13:58.153610    3130 factory.go:223] Registration of the systemd container factory successfully
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: I0919 23:13:58.153858    3130 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: I0919 23:13:58.158816    3130 factory.go:223] Registration of the containerd container factory successfully
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: E0919 23:13:58.158852    3130 manager.go:294] Registration of the raw container factory failed: inotify_init: too many open files
	Sep 19 23:13:58 embed-certs-403962 kubelet[3130]: E0919 23:13:58.158873    3130 kubelet.go:1686] "Failed to start cAdvisor" err="inotify_init: too many open files"
	Sep 19 23:13:58 embed-certs-403962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Sep 19 23:13:58 embed-certs-403962 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 19 23:13:58 embed-certs-403962 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
	Sep 19 23:13:58 embed-certs-403962 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 19 23:13:58 embed-certs-403962 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [47aa58544505ade6f4c115edc0098bf5f68e4a216f0c77407e15d893f7455d61] <==
	2025/09/19 23:13:14 Starting overwatch
	2025/09/19 23:13:14 Using namespace: kubernetes-dashboard
	2025/09/19 23:13:14 Using in-cluster config to connect to apiserver
	2025/09/19 23:13:14 Using secret token for csrf signing
	2025/09/19 23:13:14 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:13:14 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/19 23:13:14 Successful initial request to the apiserver, version: v1.34.0
	2025/09/19 23:13:14 Generating JWE encryption key
	2025/09/19 23:13:14 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/19 23:13:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/19 23:13:14 Initializing JWE encryption key from synchronized object
	2025/09/19 23:13:14 Creating in-cluster Sidecar client
	2025/09/19 23:13:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:13:14 Serving insecurely on HTTP port: 9090
	2025/09/19 23:13:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4563e410dc0f252652afbc6f753a6803481cab963203e75bf937ef3a03215571] <==
	I0919 23:13:46.748476       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 23:13:46.756538       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 23:13:46.756586       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0919 23:13:46.759328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:50.214902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:55.561544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ea70f76b17b6e8157ddbc228b4f91d0b0061b96d5691b7517b5b27a70e7700f0] <==
	I0919 23:13:02.362002       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 23:13:32.364694       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
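The kubelet section of the log above ends in a restart loop ("restart counter is at 4") because cAdvisor and the certificate/plugin watchers fail with "inotify_init: too many open files" and "error creating fsnotify watcher: too many open files", which points at exhausted inotify limits on the CI host rather than at the cluster configuration. A minimal diagnostic sketch for such a host, not part of the captured run and with an illustrative limit value:

	# inspect the current per-user inotify limits on the host
	sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
	# raise the instance limit if it is exhausted (value is illustrative, not from this report)
	sudo sysctl -w fs.inotify.max_user_instances=1024
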
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-403962 -n embed-certs-403962
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-403962 -n embed-certs-403962: exit status 2 (369.276208ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
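The helpers probe one field of `minikube status` at a time via Go templates ({{.APIServer}} above, {{.Host}} further down). When reproducing this by hand it can be convenient to read several fields in one call; a sketch, assuming the `Kubelet` field name from minikube's status struct, which is not shown in this report:

	out/minikube-linux-amd64 -p embed-certs-403962 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
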
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-403962 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-g24nt
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-403962 describe pod metrics-server-746fcd58dc-g24nt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-403962 describe pod metrics-server-746fcd58dc-g24nt: exit status 1 (78.997745ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-g24nt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-403962 describe pod metrics-server-746fcd58dc-g24nt: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-403962
helpers_test.go:243: (dbg) docker inspect embed-certs-403962:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a63af2c8f6f3d51a281ad094e87f99313b1ffdb287d036454c3eac2cd2773ece",
	        "Created": "2025-09-19T23:10:55.400103893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295421,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T23:12:49.952408367Z",
	            "FinishedAt": "2025-09-19T23:12:48.982627779Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/a63af2c8f6f3d51a281ad094e87f99313b1ffdb287d036454c3eac2cd2773ece/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a63af2c8f6f3d51a281ad094e87f99313b1ffdb287d036454c3eac2cd2773ece/hostname",
	        "HostsPath": "/var/lib/docker/containers/a63af2c8f6f3d51a281ad094e87f99313b1ffdb287d036454c3eac2cd2773ece/hosts",
	        "LogPath": "/var/lib/docker/containers/a63af2c8f6f3d51a281ad094e87f99313b1ffdb287d036454c3eac2cd2773ece/a63af2c8f6f3d51a281ad094e87f99313b1ffdb287d036454c3eac2cd2773ece-json.log",
	        "Name": "/embed-certs-403962",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-403962:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-403962",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a63af2c8f6f3d51a281ad094e87f99313b1ffdb287d036454c3eac2cd2773ece",
	                "LowerDir": "/var/lib/docker/overlay2/3a88e7826b7bee87d0fa55c7d6eb85cf6dd27425a2fda0f19e201ef730e85372-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a88e7826b7bee87d0fa55c7d6eb85cf6dd27425a2fda0f19e201ef730e85372/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a88e7826b7bee87d0fa55c7d6eb85cf6dd27425a2fda0f19e201ef730e85372/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a88e7826b7bee87d0fa55c7d6eb85cf6dd27425a2fda0f19e201ef730e85372/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-403962",
	                "Source": "/var/lib/docker/volumes/embed-certs-403962/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-403962",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-403962",
	                "name.minikube.sigs.k8s.io": "embed-certs-403962",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "43af2aa887632a3b88811a0c92a1eb3fc6e55ea6ead5b7bd04d9d10aa51f9ba8",
	            "SandboxKey": "/var/run/docker/netns/43af2aa88763",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-403962": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:eb:e4:f0:48:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eeb244b5b4d931aeb6e8ce39276a990d3a3ab31cb92cb0ad8df9ecee9db3b477",
	                    "EndpointID": "1d01e88f503576060d50fb72d8e5f51c72f1eaebdc6f82076c33e7fc88d3ef99",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-403962",
	                        "a63af2c8f6f3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
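The docker inspect output above shows the kic container publishing each guest port only on 127.0.0.1 with an ephemeral host port (8443/tcp → 33092 for the apiserver, 22/tcp → 33089 for SSH). Outside the test harness the same mapping could be read back directly; a sketch using the container name from this report:

	docker port embed-certs-403962 8443/tcp
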
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-403962 -n embed-certs-403962
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-403962 -n embed-certs-403962: exit status 2 (403.029538ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-403962 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-403962 logs -n 25: (2.049361758s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ stop    │ -p no-preload-364197 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:11 UTC │ 19 Sep 25 23:12 UTC │
	│ addons  │ enable dashboard -p no-preload-364197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ start   │ -p no-preload-364197 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-403962 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ stop    │ -p embed-certs-403962 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ image   │ old-k8s-version-757990 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ pause   │ -p old-k8s-version-757990 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ unpause │ -p old-k8s-version-757990 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ delete  │ -p old-k8s-version-757990                                                                                                                                                                                                                           │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ delete  │ -p old-k8s-version-757990                                                                                                                                                                                                                           │ old-k8s-version-757990       │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ delete  │ -p disable-driver-mounts-606373                                                                                                                                                                                                                     │ disable-driver-mounts-606373 │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ start   │ -p default-k8s-diff-port-149888 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-149888 │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-403962 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ start   │ -p embed-certs-403962 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:13 UTC │
	│ start   │ -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-430859    │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-430859    │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ delete  │ -p kubernetes-upgrade-430859                                                                                                                                                                                                                        │ kubernetes-upgrade-430859    │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ start   │ -p newest-cni-312465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-312465            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │                     │
	│ image   │ no-preload-364197 image list --format=json                                                                                                                                                                                                          │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ pause   │ -p no-preload-364197 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ unpause │ -p no-preload-364197 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ image   │ embed-certs-403962 image list --format=json                                                                                                                                                                                                         │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ pause   │ -p embed-certs-403962 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ unpause │ -p embed-certs-403962 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-403962           │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ delete  │ -p no-preload-364197                                                                                                                                                                                                                                │ no-preload-364197            │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:13:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:13:27.238593  304826 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:13:27.238920  304826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:13:27.238933  304826 out.go:374] Setting ErrFile to fd 2...
	I0919 23:13:27.238939  304826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:13:27.239301  304826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 23:13:27.240254  304826 out.go:368] Setting JSON to false
	I0919 23:13:27.242293  304826 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6951,"bootTime":1758316656,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:13:27.242391  304826 start.go:140] virtualization: kvm guest
	I0919 23:13:27.245079  304826 out.go:179] * [newest-cni-312465] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:13:27.247014  304826 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:13:27.247038  304826 notify.go:220] Checking for updates...
	I0919 23:13:27.250017  304826 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:13:27.251473  304826 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:13:27.253044  304826 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 23:13:27.254720  304826 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:13:27.256145  304826 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:13:27.258280  304826 config.go:182] Loaded profile config "default-k8s-diff-port-149888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:27.258431  304826 config.go:182] Loaded profile config "embed-certs-403962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:27.258597  304826 config.go:182] Loaded profile config "no-preload-364197": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:27.258738  304826 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:13:27.288883  304826 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:13:27.288975  304826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:13:27.365354  304826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 23:13:27.353196914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:13:27.365506  304826 docker.go:318] overlay module found
	I0919 23:13:27.367763  304826 out.go:179] * Using the docker driver based on user configuration
	I0919 23:13:27.369311  304826 start.go:304] selected driver: docker
	I0919 23:13:27.369334  304826 start.go:918] validating driver "docker" against <nil>
	I0919 23:13:27.369348  304826 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:13:27.370111  304826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:13:27.453927  304826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 23:13:27.442609844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:13:27.454140  304826 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W0919 23:13:27.454193  304826 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0919 23:13:27.454507  304826 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0919 23:13:27.457066  304826 out.go:179] * Using Docker driver with root privileges
	I0919 23:13:27.458665  304826 cni.go:84] Creating CNI manager for ""
	I0919 23:13:27.458745  304826 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:13:27.458755  304826 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 23:13:27.458835  304826 start.go:348] cluster config:
	{Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:13:27.460214  304826 out.go:179] * Starting "newest-cni-312465" primary control-plane node in "newest-cni-312465" cluster
	I0919 23:13:27.461705  304826 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 23:13:27.463479  304826 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:13:27.464969  304826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:13:27.465036  304826 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 23:13:27.465066  304826 cache.go:58] Caching tarball of preloaded images
	I0919 23:13:27.465145  304826 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:13:27.465211  304826 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:13:27.465224  304826 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 23:13:27.465373  304826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json ...
	I0919 23:13:27.465402  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json: {Name:mkbe0b2096af0dfcb672d8d5ff02d95192e51311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:27.491881  304826 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:13:27.491906  304826 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:13:27.491929  304826 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:13:27.491965  304826 start.go:360] acquireMachinesLock for newest-cni-312465: {Name:mkdaed0f91b48ccb0806887f4c48e7b6207e9286 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:13:27.492089  304826 start.go:364] duration metric: took 98.144µs to acquireMachinesLock for "newest-cni-312465"
	I0919 23:13:27.492120  304826 start.go:93] Provisioning new machine with config: &{Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMet
rics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:13:27.492213  304826 start.go:125] createHost starting for "" (driver="docker")
	I0919 23:13:25.986611  294587 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 22.501936199s
	I0919 23:13:25.991147  294587 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:13:25.991278  294587 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I0919 23:13:25.991386  294587 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:13:25.991522  294587 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W0919 23:13:25.316055  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:27.322716  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:27.416884  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	W0919 23:13:29.942623  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	I0919 23:13:27.494730  304826 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:13:27.494955  304826 start.go:159] libmachine.API.Create for "newest-cni-312465" (driver="docker")
	I0919 23:13:27.494995  304826 client.go:168] LocalClient.Create starting
	I0919 23:13:27.495095  304826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 23:13:27.495131  304826 main.go:141] libmachine: Decoding PEM data...
	I0919 23:13:27.495171  304826 main.go:141] libmachine: Parsing certificate...
	I0919 23:13:27.495239  304826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 23:13:27.495270  304826 main.go:141] libmachine: Decoding PEM data...
	I0919 23:13:27.495286  304826 main.go:141] libmachine: Parsing certificate...
	I0919 23:13:27.495751  304826 cli_runner.go:164] Run: docker network inspect newest-cni-312465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:13:27.519239  304826 cli_runner.go:211] docker network inspect newest-cni-312465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:13:27.519336  304826 network_create.go:284] running [docker network inspect newest-cni-312465] to gather additional debugging logs...
	I0919 23:13:27.519357  304826 cli_runner.go:164] Run: docker network inspect newest-cni-312465
	W0919 23:13:27.542030  304826 cli_runner.go:211] docker network inspect newest-cni-312465 returned with exit code 1
	I0919 23:13:27.542062  304826 network_create.go:287] error running [docker network inspect newest-cni-312465]: docker network inspect newest-cni-312465: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-312465 not found
	I0919 23:13:27.542075  304826 network_create.go:289] output of [docker network inspect newest-cni-312465]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-312465 not found
	
	** /stderr **
	I0919 23:13:27.542219  304826 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:13:27.573077  304826 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-465af21e2d8d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:b5:e0:20:10:48} reservation:<nil>}
	I0919 23:13:27.574029  304826 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd9dd989b948 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:ff:32:51:a1:28} reservation:<nil>}
	I0919 23:13:27.575058  304826 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d5cd7c460be9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:d8:0f:d3:49:b9} reservation:<nil>}
	I0919 23:13:27.576219  304826 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-eeb244b5b4d9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:19:45:7a:f8:43} reservation:<nil>}
	I0919 23:13:27.577101  304826 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-76962f0867a9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:d8:43:3c:3c:e2} reservation:<nil>}
	I0919 23:13:27.578259  304826 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cf1dc0}
	I0919 23:13:27.578290  304826 network_create.go:124] attempt to create docker network newest-cni-312465 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0919 23:13:27.578338  304826 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-312465 newest-cni-312465
	I0919 23:13:27.664074  304826 network_create.go:108] docker network newest-cni-312465 192.168.94.0/24 created
	I0919 23:13:27.664108  304826 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-312465" container
	I0919 23:13:27.664204  304826 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:13:27.686848  304826 cli_runner.go:164] Run: docker volume create newest-cni-312465 --label name.minikube.sigs.k8s.io=newest-cni-312465 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:13:27.711517  304826 oci.go:103] Successfully created a docker volume newest-cni-312465
	I0919 23:13:27.711624  304826 cli_runner.go:164] Run: docker run --rm --name newest-cni-312465-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-312465 --entrypoint /usr/bin/test -v newest-cni-312465:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:13:28.191316  304826 oci.go:107] Successfully prepared a docker volume newest-cni-312465
	I0919 23:13:28.191366  304826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:13:28.191389  304826 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:13:28.191481  304826 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-312465:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 23:13:32.076573  304826 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-312465:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.885033462s)
	I0919 23:13:32.076612  304826 kic.go:203] duration metric: took 3.885218568s to extract preloaded images to volume ...
	W0919 23:13:32.076710  304826 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:13:32.076743  304826 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:13:32.076794  304826 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:13:32.149761  304826 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-312465 --name newest-cni-312465 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-312465 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-312465 --network newest-cni-312465 --ip 192.168.94.2 --volume newest-cni-312465:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:13:28.139399  294587 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.148131492s
	I0919 23:13:28.449976  294587 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.458741458s
	I0919 23:13:32.493086  294587 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.501778199s
	I0919 23:13:32.510785  294587 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:13:32.524242  294587 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:13:32.539521  294587 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:13:32.539729  294587 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-149888 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:13:32.551224  294587 kubeadm.go:310] [bootstrap-token] Using token: n81jvw.nat4ajoeag176u3n
	I0919 23:13:32.553385  294587 out.go:252]   - Configuring RBAC rules ...
	I0919 23:13:32.553522  294587 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:13:32.557811  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:13:32.567024  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:13:32.570531  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:13:32.576653  294587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:13:32.580237  294587 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:13:32.901145  294587 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:13:33.324739  294587 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:13:33.900632  294587 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:13:33.901573  294587 kubeadm.go:310] 
	I0919 23:13:33.901667  294587 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:13:33.901677  294587 kubeadm.go:310] 
	I0919 23:13:33.901751  294587 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:13:33.901758  294587 kubeadm.go:310] 
	I0919 23:13:33.901777  294587 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:13:33.901831  294587 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:13:33.901895  294587 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:13:33.901902  294587 kubeadm.go:310] 
	I0919 23:13:33.901944  294587 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:13:33.901974  294587 kubeadm.go:310] 
	I0919 23:13:33.902054  294587 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:13:33.902064  294587 kubeadm.go:310] 
	I0919 23:13:33.902143  294587 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:13:33.902266  294587 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:13:33.902331  294587 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:13:33.902339  294587 kubeadm.go:310] 
	I0919 23:13:33.902406  294587 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:13:33.902479  294587 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:13:33.902485  294587 kubeadm.go:310] 
	I0919 23:13:33.902551  294587 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token n81jvw.nat4ajoeag176u3n \
	I0919 23:13:33.902635  294587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 \
	I0919 23:13:33.902655  294587 kubeadm.go:310] 	--control-plane 
	I0919 23:13:33.902661  294587 kubeadm.go:310] 
	I0919 23:13:33.902730  294587 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:13:33.902737  294587 kubeadm.go:310] 
	I0919 23:13:33.902801  294587 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token n81jvw.nat4ajoeag176u3n \
	I0919 23:13:33.902883  294587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 
	I0919 23:13:33.906239  294587 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:13:33.906372  294587 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:13:33.906402  294587 cni.go:84] Creating CNI manager for ""
	I0919 23:13:33.906416  294587 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:13:33.908216  294587 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W0919 23:13:29.819116  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:31.826948  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:34.316941  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	I0919 23:13:32.476430  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Running}}
	I0919 23:13:32.500104  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:13:32.523104  304826 cli_runner.go:164] Run: docker exec newest-cni-312465 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:13:32.578263  304826 oci.go:144] the created container "newest-cni-312465" has a running status.
	I0919 23:13:32.578295  304826 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa...
	I0919 23:13:32.976039  304826 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:13:33.009077  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:13:33.031547  304826 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:13:33.031565  304826 kic_runner.go:114] Args: [docker exec --privileged newest-cni-312465 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:13:33.092603  304826 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:13:33.115283  304826 machine.go:93] provisionDockerMachine start ...
	I0919 23:13:33.115380  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:33.139784  304826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:13:33.140058  304826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I0919 23:13:33.140073  304826 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:13:33.290427  304826 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-312465
	
	I0919 23:13:33.290458  304826 ubuntu.go:182] provisioning hostname "newest-cni-312465"
	I0919 23:13:33.290507  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:33.316275  304826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:13:33.316511  304826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I0919 23:13:33.316526  304826 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-312465 && echo "newest-cni-312465" | sudo tee /etc/hostname
	I0919 23:13:33.472768  304826 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-312465
	
	I0919 23:13:33.472864  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:33.494111  304826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:13:33.494398  304826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I0919 23:13:33.494430  304826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-312465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-312465/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-312465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:13:33.635421  304826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:13:33.635451  304826 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 23:13:33.635494  304826 ubuntu.go:190] setting up certificates
	I0919 23:13:33.635517  304826 provision.go:84] configureAuth start
	I0919 23:13:33.635574  304826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:13:33.655878  304826 provision.go:143] copyHostCerts
	I0919 23:13:33.655961  304826 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 23:13:33.655977  304826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 23:13:33.656058  304826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 23:13:33.656241  304826 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 23:13:33.656255  304826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 23:13:33.656304  304826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 23:13:33.656405  304826 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 23:13:33.656415  304826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 23:13:33.656457  304826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 23:13:33.656554  304826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.newest-cni-312465 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-312465]
	I0919 23:13:34.255292  304826 provision.go:177] copyRemoteCerts
	I0919 23:13:34.255368  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:13:34.255413  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.284316  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.387988  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 23:13:34.419504  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0919 23:13:34.448496  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:13:34.475661  304826 provision.go:87] duration metric: took 840.126723ms to configureAuth
	I0919 23:13:34.475694  304826 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:13:34.475872  304826 config.go:182] Loaded profile config "newest-cni-312465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:34.475881  304826 machine.go:96] duration metric: took 1.360576611s to provisionDockerMachine
	I0919 23:13:34.475891  304826 client.go:171] duration metric: took 6.980885128s to LocalClient.Create
	I0919 23:13:34.475913  304826 start.go:167] duration metric: took 6.980958258s to libmachine.API.Create "newest-cni-312465"
	I0919 23:13:34.475926  304826 start.go:293] postStartSetup for "newest-cni-312465" (driver="docker")
	I0919 23:13:34.475937  304826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:13:34.475995  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:13:34.476029  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.496668  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.598095  304826 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:13:34.602045  304826 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:13:34.602091  304826 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:13:34.602104  304826 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:13:34.602111  304826 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:13:34.602121  304826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 23:13:34.602190  304826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 23:13:34.602281  304826 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 23:13:34.602369  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:13:34.612660  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:13:34.643262  304826 start.go:296] duration metric: took 167.32169ms for postStartSetup
	I0919 23:13:34.643684  304826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:13:34.663272  304826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json ...
	I0919 23:13:34.663583  304826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:13:34.663633  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.683961  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.779205  304826 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:13:34.785070  304826 start.go:128] duration metric: took 7.292838847s to createHost
	I0919 23:13:34.785099  304826 start.go:83] releasing machines lock for "newest-cni-312465", held for 7.292995602s
	I0919 23:13:34.785189  304826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:13:34.807464  304826 ssh_runner.go:195] Run: cat /version.json
	I0919 23:13:34.807503  304826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:13:34.807575  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.807583  304826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:13:34.829219  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:34.829637  304826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:13:35.008352  304826 ssh_runner.go:195] Run: systemctl --version
	I0919 23:13:35.013908  304826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:13:35.019269  304826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:13:35.055596  304826 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:13:35.055680  304826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:13:35.090798  304826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 23:13:35.090825  304826 start.go:495] detecting cgroup driver to use...
	I0919 23:13:35.090862  304826 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:13:35.090925  304826 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 23:13:35.106670  304826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:13:35.120167  304826 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:13:35.120229  304826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:13:35.136229  304826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:13:35.152080  304826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:13:35.229432  304826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:13:35.314675  304826 docker.go:234] disabling docker service ...
	I0919 23:13:35.314746  304826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:13:35.336969  304826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:13:35.352061  304826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:13:35.433841  304826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:13:35.511892  304826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:13:35.525179  304826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:13:35.544848  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:13:35.558556  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:13:35.570787  304826 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:13:35.570874  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:13:35.583714  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:13:35.596563  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:13:35.608811  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:13:35.621274  304826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:13:35.632671  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:13:35.646560  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:13:35.659112  304826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:13:35.671491  304826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:13:35.681987  304826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:13:35.693319  304826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:13:35.765943  304826 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 23:13:35.900474  304826 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 23:13:35.900553  304826 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 23:13:35.904775  304826 start.go:563] Will wait 60s for crictl version
	I0919 23:13:35.904838  304826 ssh_runner.go:195] Run: which crictl
	I0919 23:13:35.908969  304826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:13:35.948499  304826 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 23:13:35.948718  304826 ssh_runner.go:195] Run: containerd --version
	I0919 23:13:35.976417  304826 ssh_runner.go:195] Run: containerd --version
	I0919 23:13:36.005950  304826 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 23:13:36.007659  304826 cli_runner.go:164] Run: docker network inspect newest-cni-312465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:13:36.028772  304826 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0919 23:13:36.033878  304826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:13:36.053802  304826 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W0919 23:13:31.971038  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	W0919 23:13:34.412827  286555 pod_ready.go:104] pod "coredns-66bc5c9577-xg99k" is not "Ready", error: <nil>
	I0919 23:13:36.412824  286555 pod_ready.go:94] pod "coredns-66bc5c9577-xg99k" is "Ready"
	I0919 23:13:36.412859  286555 pod_ready.go:86] duration metric: took 1m14.00590752s for pod "coredns-66bc5c9577-xg99k" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.415705  286555 pod_ready.go:83] waiting for pod "etcd-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.420550  286555 pod_ready.go:94] pod "etcd-no-preload-364197" is "Ready"
	I0919 23:13:36.420580  286555 pod_ready.go:86] duration metric: took 4.848977ms for pod "etcd-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.423284  286555 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.428673  286555 pod_ready.go:94] pod "kube-apiserver-no-preload-364197" is "Ready"
	I0919 23:13:36.428703  286555 pod_ready.go:86] duration metric: took 5.394829ms for pod "kube-apiserver-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.431305  286555 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.610936  286555 pod_ready.go:94] pod "kube-controller-manager-no-preload-364197" is "Ready"
	I0919 23:13:36.610963  286555 pod_ready.go:86] duration metric: took 179.625984ms for pod "kube-controller-manager-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:36.056701  304826 kubeadm.go:875] updating cluster {Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:13:36.056877  304826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:13:36.057030  304826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:13:36.099591  304826 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:13:36.099615  304826 containerd.go:534] Images already preloaded, skipping extraction
	I0919 23:13:36.099675  304826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:13:36.143373  304826 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:13:36.143413  304826 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:13:36.143421  304826 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 containerd true true} ...
	I0919 23:13:36.143508  304826 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-312465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:13:36.143562  304826 ssh_runner.go:195] Run: sudo crictl info
	I0919 23:13:36.185797  304826 cni.go:84] Creating CNI manager for ""
	I0919 23:13:36.185828  304826 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:13:36.185843  304826 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0919 23:13:36.185875  304826 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-312465 NodeName:newest-cni-312465 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:13:36.186182  304826 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-312465"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:13:36.186269  304826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:13:36.198096  304826 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:13:36.198546  304826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:13:36.214736  304826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0919 23:13:36.244125  304826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:13:36.270995  304826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I0919 23:13:36.295177  304826 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:13:36.299365  304826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:13:36.313119  304826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:13:36.396378  304826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:13:36.418497  304826 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465 for IP: 192.168.94.2
	I0919 23:13:36.418522  304826 certs.go:194] generating shared ca certs ...
	I0919 23:13:36.418544  304826 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.418705  304826 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 23:13:36.418761  304826 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 23:13:36.418775  304826 certs.go:256] generating profile certs ...
	I0919 23:13:36.418843  304826 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.key
	I0919 23:13:36.418860  304826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.crt with IP's: []
	I0919 23:13:36.531217  304826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.crt ...
	I0919 23:13:36.531247  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.crt: {Name:mk2dead7c7dd4abba877b10a34bd54e0741b0c51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.531436  304826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.key ...
	I0919 23:13:36.531449  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.key: {Name:mkb2dce7d200188d9475ab5211c83bb5dd871bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.531531  304826 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb
	I0919 23:13:36.531547  304826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0919 23:13:36.764681  304826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb ...
	I0919 23:13:36.764719  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb: {Name:mkd78eb5b6eba4ac120b530170a9a115208fec96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.764949  304826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb ...
	I0919 23:13:36.764969  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb: {Name:mk23f979dad453c3780b4813b8fc576ea9e94f4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:36.765077  304826 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt.41c88afb -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt
	I0919 23:13:36.765208  304826 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key
	I0919 23:13:36.765299  304826 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key
	I0919 23:13:36.765323  304826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt with IP's: []
	I0919 23:13:36.811680  286555 pod_ready.go:83] waiting for pod "kube-proxy-t4j4z" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.211272  286555 pod_ready.go:94] pod "kube-proxy-t4j4z" is "Ready"
	I0919 23:13:37.211303  286555 pod_ready.go:86] duration metric: took 399.591313ms for pod "kube-proxy-t4j4z" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.410092  286555 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.810858  286555 pod_ready.go:94] pod "kube-scheduler-no-preload-364197" is "Ready"
	I0919 23:13:37.810890  286555 pod_ready.go:86] duration metric: took 400.769138ms for pod "kube-scheduler-no-preload-364197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:37.810907  286555 pod_ready.go:40] duration metric: took 1m15.409243632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:13:37.871652  286555 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:13:37.873712  286555 out.go:179] * Done! kubectl is now configured to use "no-preload-364197" cluster and "default" namespace by default
	I0919 23:13:33.909671  294587 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 23:13:33.914917  294587 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 23:13:33.914945  294587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 23:13:33.936898  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 23:13:34.176650  294587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:13:34.176752  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:34.176780  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-149888 minikube.k8s.io/updated_at=2025_09_19T23_13_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=default-k8s-diff-port-149888 minikube.k8s.io/primary=true
	I0919 23:13:34.185919  294587 ops.go:34] apiserver oom_adj: -16
	I0919 23:13:34.285582  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:34.786386  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:35.286435  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:35.786591  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:36.286349  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:36.786365  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:37.286088  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:37.786249  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:38.286182  294587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:13:38.381035  294587 kubeadm.go:1105] duration metric: took 4.204361703s to wait for elevateKubeSystemPrivileges
	I0919 23:13:38.381076  294587 kubeadm.go:394] duration metric: took 40.106256802s to StartCluster
	I0919 23:13:38.381101  294587 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:38.381208  294587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:13:38.383043  294587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:38.383384  294587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:13:38.383418  294587 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:13:38.383497  294587 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:13:38.383584  294587 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-149888"
	I0919 23:13:38.383599  294587 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-149888"
	I0919 23:13:38.383622  294587 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-149888"
	I0919 23:13:38.383623  294587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-149888"
	I0919 23:13:38.383638  294587 config.go:182] Loaded profile config "default-k8s-diff-port-149888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:13:38.383654  294587 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:13:38.384100  294587 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:13:38.384352  294587 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:13:38.386876  294587 out.go:179] * Verifying Kubernetes components...
	I0919 23:13:38.392366  294587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:13:38.414274  294587 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:13:37.730859  304826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt ...
	I0919 23:13:37.730889  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt: {Name:mka643fd8f3814e682ac62f488ac921be438271e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:37.731102  304826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key ...
	I0919 23:13:37.731122  304826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key: {Name:mk1e0a6b750f125c5af55b66a1efb72f4537d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:13:37.731375  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:13:37.731416  304826 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:13:37.731424  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:13:37.731453  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:13:37.731475  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:13:37.731496  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:13:37.731531  304826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:13:37.732086  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:13:37.760205  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:13:37.788964  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:13:37.821273  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:13:37.854511  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 23:13:37.886302  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:13:37.919585  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:13:37.949973  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:13:37.982330  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:13:38.018976  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:13:38.049608  304826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:13:38.081886  304826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:13:38.109125  304826 ssh_runner.go:195] Run: openssl version
	I0919 23:13:38.118278  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:13:38.133041  304826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:13:38.138504  304826 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:13:38.138570  304826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:13:38.147725  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 23:13:38.160519  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:13:38.174178  304826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:13:38.179241  304826 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:13:38.179303  304826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:13:38.188486  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:13:38.203742  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:13:38.216299  304826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:13:38.221016  304826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:13:38.221087  304826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:13:38.229132  304826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:13:38.242362  304826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:13:38.247181  304826 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:13:38.247247  304826 kubeadm.go:392] StartCluster: {Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:13:38.247335  304826 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:13:38.247392  304826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:13:38.289664  304826 cri.go:89] found id: ""
	I0919 23:13:38.289745  304826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:13:38.300688  304826 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:13:38.314602  304826 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:13:38.314666  304826 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:13:38.328513  304826 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:13:38.328532  304826 kubeadm.go:157] found existing configuration files:
	
	I0919 23:13:38.328573  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:13:38.340801  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:13:38.340902  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:13:38.354142  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:13:38.367990  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:13:38.368067  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:13:38.379710  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:13:38.393587  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:13:38.393654  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:13:38.406457  304826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:13:38.423007  304826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:13:38.423071  304826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:13:38.441889  304826 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:13:38.509349  304826 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:13:38.509425  304826 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:13:38.535354  304826 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:13:38.535436  304826 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:13:38.535487  304826 kubeadm.go:310] OS: Linux
	I0919 23:13:38.535547  304826 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:13:38.535585  304826 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:13:38.535633  304826 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:13:38.535689  304826 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:13:38.535753  304826 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:13:38.535813  304826 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:13:38.535850  304826 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:13:38.535885  304826 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:13:38.621848  304826 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:13:38.622065  304826 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:13:38.622186  304826 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:13:38.630978  304826 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:13:38.415345  294587 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:13:38.415366  294587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:13:38.415418  294587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:13:38.415735  294587 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-149888"
	I0919 23:13:38.415780  294587 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:13:38.416297  294587 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:13:38.445969  294587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:13:38.447208  294587 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:13:38.447231  294587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:13:38.447297  294587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:13:38.480457  294587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:13:38.540300  294587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:13:38.557619  294587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:13:38.594341  294587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:13:38.630764  294587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:13:38.799085  294587 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0919 23:13:38.800978  294587 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-149888" to be "Ready" ...
	I0919 23:13:38.812605  294587 node_ready.go:49] node "default-k8s-diff-port-149888" is "Ready"
	I0919 23:13:38.812642  294587 node_ready.go:38] duration metric: took 11.622008ms for node "default-k8s-diff-port-149888" to be "Ready" ...
	I0919 23:13:38.812666  294587 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:13:38.812750  294587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:13:39.036443  294587 api_server.go:72] duration metric: took 652.97537ms to wait for apiserver process to appear ...
	I0919 23:13:39.036471  294587 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:13:39.036490  294587 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0919 23:13:39.043372  294587 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0919 23:13:39.047190  294587 api_server.go:141] control plane version: v1.34.0
	I0919 23:13:39.047226  294587 api_server.go:131] duration metric: took 10.747839ms to wait for apiserver health ...
	I0919 23:13:39.047237  294587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:13:39.049788  294587 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W0919 23:13:36.317685  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	W0919 23:13:38.318647  295194 pod_ready.go:104] pod "coredns-66bc5c9577-t6v26" is not "Ready", error: <nil>
	I0919 23:13:39.819987  295194 pod_ready.go:94] pod "coredns-66bc5c9577-t6v26" is "Ready"
	I0919 23:13:39.820015  295194 pod_ready.go:86] duration metric: took 37.509771492s for pod "coredns-66bc5c9577-t6v26" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.822985  295194 pod_ready.go:83] waiting for pod "etcd-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.827553  295194 pod_ready.go:94] pod "etcd-embed-certs-403962" is "Ready"
	I0919 23:13:39.827574  295194 pod_ready.go:86] duration metric: took 4.567201ms for pod "etcd-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.829949  295194 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.834015  295194 pod_ready.go:94] pod "kube-apiserver-embed-certs-403962" is "Ready"
	I0919 23:13:39.834041  295194 pod_ready.go:86] duration metric: took 4.068136ms for pod "kube-apiserver-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:39.836103  295194 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.014492  295194 pod_ready.go:94] pod "kube-controller-manager-embed-certs-403962" is "Ready"
	I0919 23:13:40.014519  295194 pod_ready.go:86] duration metric: took 178.389529ms for pod "kube-controller-manager-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.214694  295194 pod_ready.go:83] waiting for pod "kube-proxy-5tf2s" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.614193  295194 pod_ready.go:94] pod "kube-proxy-5tf2s" is "Ready"
	I0919 23:13:40.614222  295194 pod_ready.go:86] duration metric: took 399.49287ms for pod "kube-proxy-5tf2s" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:40.814999  295194 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:41.214398  295194 pod_ready.go:94] pod "kube-scheduler-embed-certs-403962" is "Ready"
	I0919 23:13:41.214429  295194 pod_ready.go:86] duration metric: took 399.403485ms for pod "kube-scheduler-embed-certs-403962" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:13:41.214439  295194 pod_ready.go:40] duration metric: took 38.913620351s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:13:41.267599  295194 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:13:41.270700  295194 out.go:179] * Done! kubectl is now configured to use "embed-certs-403962" cluster and "default" namespace by default
	I0919 23:13:38.634403  304826 out.go:252]   - Generating certificates and keys ...
	I0919 23:13:38.634645  304826 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:13:38.634729  304826 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:13:38.733514  304826 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:13:39.062476  304826 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:13:39.133445  304826 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:13:39.439953  304826 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:13:39.872072  304826 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:13:39.872221  304826 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-312465] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0919 23:13:39.972922  304826 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:13:39.973129  304826 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-312465] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0919 23:13:40.957549  304826 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:13:41.144394  304826 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:13:41.426739  304826 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:13:41.426849  304826 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:13:41.554555  304826 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:13:41.608199  304826 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:13:41.645796  304826 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:13:41.778911  304826 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:13:41.900942  304826 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:13:41.901396  304826 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:13:41.905522  304826 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:13:41.907209  304826 out.go:252]   - Booting up control plane ...
	I0919 23:13:41.907335  304826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:13:41.907460  304826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:13:41.907982  304826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:13:41.919781  304826 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:13:41.919920  304826 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:13:41.926298  304826 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:13:41.926476  304826 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:13:41.926547  304826 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:13:42.017500  304826 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:13:42.017660  304826 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:13:39.052217  294587 addons.go:514] duration metric: took 668.711417ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 23:13:39.053005  294587 system_pods.go:59] 9 kube-system pods found
	I0919 23:13:39.053044  294587 system_pods.go:61] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.053057  294587 system_pods.go:61] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.053070  294587 system_pods.go:61] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:13:39.053085  294587 system_pods.go:61] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 23:13:39.053092  294587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.053105  294587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.053113  294587 system_pods.go:61] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.053135  294587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.053144  294587 system_pods.go:61] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.053189  294587 system_pods.go:74] duration metric: took 5.910482ms to wait for pod list to return data ...
	I0919 23:13:39.053205  294587 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:13:39.055828  294587 default_sa.go:45] found service account: "default"
	I0919 23:13:39.055846  294587 default_sa.go:55] duration metric: took 2.635401ms for default service account to be created ...
	I0919 23:13:39.055855  294587 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:13:39.058754  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:39.058787  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.058797  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.058807  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:13:39.058821  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 23:13:39.058830  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.058841  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.058846  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.058852  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.058857  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.058878  294587 retry.go:31] will retry after 270.945985ms: missing components: kube-dns, kube-proxy
	I0919 23:13:39.304737  294587 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-149888" context rescaled to 1 replicas
	I0919 23:13:39.337213  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:39.337253  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.337265  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.337271  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:39.337278  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:39.337284  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.337290  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.337298  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.337305  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.337314  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.337335  294587 retry.go:31] will retry after 357.220825ms: missing components: kube-dns
	I0919 23:13:39.698915  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:39.698949  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.698958  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:39.698966  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:39.698975  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:39.698980  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:39.698987  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:39.698995  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:39.699002  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:39.699013  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:39.699035  294587 retry.go:31] will retry after 375.514546ms: missing components: kube-dns
	I0919 23:13:40.079067  294587 system_pods.go:86] 9 kube-system pods found
	I0919 23:13:40.079105  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:40.079117  294587 system_pods.go:89] "coredns-66bc5c9577-ttpd2" [bd648f41-de22-41eb-80bf-57cce99ce5f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:40.079125  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:40.079131  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:40.079136  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:40.079141  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:40.079148  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:40.079191  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:40.079199  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:13:40.079216  294587 retry.go:31] will retry after 558.632768ms: missing components: kube-dns
	I0919 23:13:40.643894  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:40.643930  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:40.643938  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:40.643947  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:40.643953  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:40.643960  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:40.643970  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:40.643983  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:40.643989  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:40.644010  294587 retry.go:31] will retry after 761.400913ms: missing components: kube-dns
	I0919 23:13:41.410199  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:41.410236  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:41.410250  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:41.410257  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:41.410263  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:41.410269  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:41.410277  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:41.410285  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:41.410291  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:41.410312  294587 retry.go:31] will retry after 629.477098ms: missing components: kube-dns
	I0919 23:13:42.043664  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:42.043705  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:42.043715  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:42.043724  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:42.043729  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:42.043739  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:42.043747  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:42.043753  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:42.043762  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:42.043778  294587 retry.go:31] will retry after 1.069085397s: missing components: kube-dns
	I0919 23:13:43.117253  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:43.117290  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:43.117297  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:43.117305  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:43.117308  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:43.117312  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:43.117318  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:43.117322  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:43.117326  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:43.117339  294587 retry.go:31] will retry after 1.031094562s: missing components: kube-dns
	I0919 23:13:44.153419  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:44.153454  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:44.153460  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:44.153467  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:44.153472  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:44.153475  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:44.153480  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:44.153484  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:44.153487  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:44.153499  294587 retry.go:31] will retry after 1.715155668s: missing components: kube-dns
	I0919 23:13:45.873736  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:45.873776  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:45.873786  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:45.873794  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:45.873800  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:45.873805  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:45.873820  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:45.873826  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:45.873832  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:45.873863  294587 retry.go:31] will retry after 2.128059142s: missing components: kube-dns
	I0919 23:13:48.006564  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:48.006602  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:48.006610  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:48.006618  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:48.006624  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:48.006630  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:48.006635  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:48.006640  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:48.006647  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:48.006662  294587 retry.go:31] will retry after 1.782367114s: missing components: kube-dns
	I0919 23:13:50.518700  304826 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 8.501106835s
	I0919 23:13:50.522818  304826 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:13:50.522974  304826 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I0919 23:13:50.523114  304826 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:13:50.523256  304826 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:13:49.793148  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:49.793210  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:49.793217  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:49.793223  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:49.793229  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:49.793232  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:49.793243  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:49.793246  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:49.793251  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:49.793265  294587 retry.go:31] will retry after 2.338572613s: missing components: kube-dns
	I0919 23:13:52.140344  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:52.140388  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:52.140397  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:52.140407  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:52.140413  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:52.140419  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:52.140428  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:52.140435  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:52.140442  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:52.140471  294587 retry.go:31] will retry after 3.086457646s: missing components: kube-dns
	I0919 23:13:52.884946  304826 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.362051829s
	I0919 23:13:53.462893  304826 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.939923299s
	I0919 23:13:55.526762  304826 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.001364253s
	I0919 23:13:55.539011  304826 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:13:55.554378  304826 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:13:55.568644  304826 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:13:55.568919  304826 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-312465 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:13:55.589739  304826 kubeadm.go:310] [bootstrap-token] Using token: jlnn4o.ezmdj0dkuh5aygdp
	I0919 23:13:55.597493  304826 out.go:252]   - Configuring RBAC rules ...
	I0919 23:13:55.597663  304826 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:13:55.605517  304826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:13:55.615421  304826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:13:55.619862  304826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:13:55.623882  304826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:13:55.627801  304826 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:13:55.932128  304826 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:13:56.356624  304826 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:13:56.933510  304826 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:13:56.934045  304826 kubeadm.go:310] 
	I0919 23:13:56.934263  304826 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:13:56.934299  304826 kubeadm.go:310] 
	I0919 23:13:56.934450  304826 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:13:56.934513  304826 kubeadm.go:310] 
	I0919 23:13:56.934545  304826 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:13:56.934630  304826 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:13:56.934686  304826 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:13:56.934691  304826 kubeadm.go:310] 
	I0919 23:13:56.934758  304826 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:13:56.934770  304826 kubeadm.go:310] 
	I0919 23:13:56.934825  304826 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:13:56.934831  304826 kubeadm.go:310] 
	I0919 23:13:56.934891  304826 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:13:56.934986  304826 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:13:56.935060  304826 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:13:56.935065  304826 kubeadm.go:310] 
	I0919 23:13:56.935176  304826 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:13:56.935268  304826 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:13:56.935275  304826 kubeadm.go:310] 
	I0919 23:13:56.935375  304826 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jlnn4o.ezmdj0dkuh5aygdp \
	I0919 23:13:56.935496  304826 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 \
	I0919 23:13:56.935523  304826 kubeadm.go:310] 	--control-plane 
	I0919 23:13:56.935529  304826 kubeadm.go:310] 
	I0919 23:13:56.941214  304826 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:13:56.941255  304826 kubeadm.go:310] 
	I0919 23:13:56.941369  304826 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jlnn4o.ezmdj0dkuh5aygdp \
	I0919 23:13:56.941535  304826 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 
	I0919 23:13:56.944009  304826 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:13:56.944144  304826 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:13:56.944225  304826 cni.go:84] Creating CNI manager for ""
	I0919 23:13:56.944236  304826 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:13:56.946333  304826 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 23:13:56.948244  304826 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 23:13:56.954968  304826 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 23:13:56.954992  304826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 23:13:56.981556  304826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 23:13:55.233827  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:13:55.233869  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:13:55.233879  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:13:55.233890  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:13:55.233895  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:13:55.233900  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:13:55.233909  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:13:55.233926  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:13:55.233932  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:13:55.233951  294587 retry.go:31] will retry after 4.458479777s: missing components: kube-dns
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	4563e410dc0f2       6e38f40d628db       14 seconds ago       Running             storage-provisioner         3                   a66acbb7b7fb7       storage-provisioner
	a290868582b05       523cad1a4df73       31 seconds ago       Exited              dashboard-metrics-scraper   2                   5cc7b3904cba5       dashboard-metrics-scraper-6ffb444bf9-cnlgb
	47aa58544505a       07655ddf2eebe       46 seconds ago       Running             kubernetes-dashboard        0                   e2968b557b4b7       kubernetes-dashboard-855c9754f9-9hzq9
	d85d77e0cb950       409467f978b4a       58 seconds ago       Running             kindnet-cni                 1                   dce54f2503eb1       kindnet-cfvvr
	34e7809edb448       56cc512116c8f       58 seconds ago       Running             busybox                     1                   454b71d72e776       busybox
	cc833990e602c       52546a367cc9e       58 seconds ago       Running             coredns                     1                   1a6db11ddddab       coredns-66bc5c9577-t6v26
	ea70f76b17b6e       6e38f40d628db       58 seconds ago       Exited              storage-provisioner         2                   a66acbb7b7fb7       storage-provisioner
	3a603aaa7a1bc       df0860106674d       58 seconds ago       Running             kube-proxy                  3                   4c0f4703a4e72       kube-proxy-5tf2s
	685fd68b08faf       a0af72f2ec6d6       About a minute ago   Running             kube-controller-manager     1                   e0f9c4a55d1c8       kube-controller-manager-embed-certs-403962
	483f16593a289       46169d968e920       About a minute ago   Running             kube-scheduler              1                   e656d3903d6c0       kube-scheduler-embed-certs-403962
	fff89acc2e74b       90550c43ad2bc       About a minute ago   Running             kube-apiserver              1                   adb7d9006058d       kube-apiserver-embed-certs-403962
	e01a7e7e7cd1e       5f1f5298c888d       About a minute ago   Running             etcd                        1                   f46f404ac9d28       etcd-embed-certs-403962
	56145aab088b8       56cc512116c8f       About a minute ago   Exited              busybox                     0                   c877c65b7d0e6       busybox
	5a6738588eda9       52546a367cc9e       About a minute ago   Exited              coredns                     0                   5dfea961e621a       coredns-66bc5c9577-t6v26
	c5049fc2e8ac9       df0860106674d       2 minutes ago        Exited              kube-proxy                  2                   53519fbdb5fc0       kube-proxy-5tf2s
	6044b48856573       409467f978b4a       2 minutes ago        Exited              kindnet-cni                 0                   8ce059f3c7b8d       kindnet-cfvvr
	432944df07afe       a0af72f2ec6d6       2 minutes ago        Exited              kube-controller-manager     0                   a58b63567c0d4       kube-controller-manager-embed-certs-403962
	3a9a8f6fc34ea       46169d968e920       2 minutes ago        Exited              kube-scheduler              0                   5a88c51511690       kube-scheduler-embed-certs-403962
	bfd145fe58ffd       90550c43ad2bc       2 minutes ago        Exited              kube-apiserver              0                   cef1a795a0d60       kube-apiserver-embed-certs-403962
	cf7db7dc6b4de       5f1f5298c888d       2 minutes ago        Exited              etcd                        0                   4126d76c28cb6       etcd-embed-certs-403962
	
	
	==> containerd <==
	Sep 19 23:13:46 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:46.662730046Z" level=info msg="CreateContainer within sandbox \"a66acbb7b7fb7d81cf63f6dfe0585062ae52aa8874150698912e1fe48bff0282\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:3,}"
	Sep 19 23:13:46 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:46.679231848Z" level=info msg="CreateContainer within sandbox \"a66acbb7b7fb7d81cf63f6dfe0585062ae52aa8874150698912e1fe48bff0282\" for &ContainerMetadata{Name:storage-provisioner,Attempt:3,} returns container id \"4563e410dc0f252652afbc6f753a6803481cab963203e75bf937ef3a03215571\""
	Sep 19 23:13:46 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:46.680132476Z" level=info msg="StartContainer for \"4563e410dc0f252652afbc6f753a6803481cab963203e75bf937ef3a03215571\""
	Sep 19 23:13:46 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:46.736304005Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 23:13:46 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:46.742271462Z" level=info msg="StartContainer for \"4563e410dc0f252652afbc6f753a6803481cab963203e75bf937ef3a03215571\" returns successfully"
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.846110567Z" level=info msg="StopPodSandbox for \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\""
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.846728703Z" level=info msg="TearDown network for sandbox \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\" successfully"
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.846982825Z" level=info msg="StopPodSandbox for \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\" returns successfully"
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.848635343Z" level=info msg="RemovePodSandbox for \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\""
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.848687445Z" level=info msg="Forcibly stopping sandbox \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\""
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.848794316Z" level=info msg="TearDown network for sandbox \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\" successfully"
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.853518713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 19 23:13:55 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:55.853606013Z" level=info msg="RemovePodSandbox \"44e2025708a7265d2c1d0ac8cd01bd2f50bb3ad118af8efc456bdbd4676d47e0\" returns successfully"
	Sep 19 23:13:58 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:58.924279680Z" level=info msg="StopPodSandbox for \"b4f25539e2586d9d34bd16b02346eb24995d77e5511adca230490b8bd84e266d\""
	Sep 19 23:13:58 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:58.924433157Z" level=info msg="TearDown network for sandbox \"b4f25539e2586d9d34bd16b02346eb24995d77e5511adca230490b8bd84e266d\" successfully"
	Sep 19 23:13:58 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:58.924457689Z" level=info msg="StopPodSandbox for \"b4f25539e2586d9d34bd16b02346eb24995d77e5511adca230490b8bd84e266d\" returns successfully"
	Sep 19 23:13:59 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:59.666340033Z" level=info msg="StopPodSandbox for \"b4f25539e2586d9d34bd16b02346eb24995d77e5511adca230490b8bd84e266d\""
	Sep 19 23:13:59 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:59.666499116Z" level=info msg="TearDown network for sandbox \"b4f25539e2586d9d34bd16b02346eb24995d77e5511adca230490b8bd84e266d\" successfully"
	Sep 19 23:13:59 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:59.666519984Z" level=info msg="StopPodSandbox for \"b4f25539e2586d9d34bd16b02346eb24995d77e5511adca230490b8bd84e266d\" returns successfully"
	Sep 19 23:13:59 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:59.667976981Z" level=info msg="RemovePodSandbox for \"b4f25539e2586d9d34bd16b02346eb24995d77e5511adca230490b8bd84e266d\""
	Sep 19 23:13:59 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:59.668038716Z" level=info msg="Forcibly stopping sandbox \"b4f25539e2586d9d34bd16b02346eb24995d77e5511adca230490b8bd84e266d\""
	Sep 19 23:13:59 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:59.668207728Z" level=info msg="TearDown network for sandbox \"b4f25539e2586d9d34bd16b02346eb24995d77e5511adca230490b8bd84e266d\" successfully"
	Sep 19 23:13:59 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:59.674109492Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4f25539e2586d9d34bd16b02346eb24995d77e5511adca230490b8bd84e266d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 19 23:13:59 embed-certs-403962 containerd[476]: time="2025-09-19T23:13:59.674247150Z" level=info msg="RemovePodSandbox \"b4f25539e2586d9d34bd16b02346eb24995d77e5511adca230490b8bd84e266d\" returns successfully"
	Sep 19 23:14:00 embed-certs-403962 containerd[476]: time="2025-09-19T23:14:00.625146093Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	
	
	==> coredns [5a6738588eda9670758d2c95ddd575f0d3bbe663fccc84269735b439c58d2240] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55865 - 15726 "HINFO IN 8224844395446692356.8669349578143619784. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019682314s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cc833990e602ca5b705b8aa5ac46b56807fa0fadf23b708a7d23265bfeb92d8f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45444 - 21218 "HINFO IN 2590847814099361813.5910761736158485681. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020611972s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-403962
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-403962
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=embed-certs-403962
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_11_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:11:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-403962
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:14:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:14:00 +0000   Fri, 19 Sep 2025 23:11:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:14:00 +0000   Fri, 19 Sep 2025 23:11:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:14:00 +0000   Fri, 19 Sep 2025 23:11:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:14:00 +0000   Fri, 19 Sep 2025 23:11:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-403962
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 0835b9f66bce444bab3315337fb85fb5
	  System UUID:                01ab2205-6958-4b6a-b331-e4029a4f9b37
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-t6v26                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m43s
	  kube-system                 etcd-embed-certs-403962                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m49s
	  kube-system                 kindnet-cfvvr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m44s
	  kube-system                 kube-apiserver-embed-certs-403962             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 kube-controller-manager-embed-certs-403962    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 kube-proxy-5tf2s                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-scheduler-embed-certs-403962             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 metrics-server-746fcd58dc-g24nt               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         84s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-cnlgb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9hzq9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m24s                  kube-proxy       
	  Normal  Starting                 58s                    kube-proxy       
	  Normal  NodeAllocatableEnforced  2m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m55s (x8 over 2m55s)  kubelet          Node embed-certs-403962 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m55s (x8 over 2m55s)  kubelet          Node embed-certs-403962 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m55s (x7 over 2m55s)  kubelet          Node embed-certs-403962 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m55s                  kubelet          Starting kubelet.
	  Normal  Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m49s                  kubelet          Node embed-certs-403962 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m49s                  kubelet          Node embed-certs-403962 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m49s                  kubelet          Node embed-certs-403962 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m45s                  node-controller  Node embed-certs-403962 event: Registered Node embed-certs-403962 in Controller
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node embed-certs-403962 status is now: NodeHasSufficientMemory
	  Normal  Starting                 65s                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node embed-certs-403962 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x7 over 65s)      kubelet          Node embed-certs-403962 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  65s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           57s                    node-controller  Node embed-certs-403962 event: Registered Node embed-certs-403962 in Controller
	  Normal  Starting                 6s                     kubelet          Starting kubelet.
	  Normal  Starting                 5s                     kubelet          Starting kubelet.
	  Normal  Starting                 4s                     kubelet          Starting kubelet.
	  Normal  Starting                 3s                     kubelet          Starting kubelet.
	  Normal  Starting                 3s                     kubelet          Starting kubelet.
	  Normal  Starting                 2s                     kubelet          Starting kubelet.
	  Normal  Starting                 1s                     kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  1s                     kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  1s                     kubelet          Node embed-certs-403962 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    1s                     kubelet          Node embed-certs-403962 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     1s                     kubelet          Node embed-certs-403962 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.995983] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.504855] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.994952] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.505945] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501588] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.993278] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.507907] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500995] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.992251] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.509112] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501500] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989799] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.510643] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501653] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989383] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.511830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500482] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989056] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.512088] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500241] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501649] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501291] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501422] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[Sep19 23:13] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	
	
	==> etcd [cf7db7dc6b4def457de8f1757d1f052269f241085897a0915ab05475e9007382] <==
	{"level":"warn","ts":"2025-09-19T23:11:09.188665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.198456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.205371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.213033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.219708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.226288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.232685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.240509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.247778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.255026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.262017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.269447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.276733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.284466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.292397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.299271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.306899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.314207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.322104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.329501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.342457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.346477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.353073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.360047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:11:09.416438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33376","server-name":"","error":"EOF"}
	
	
	==> etcd [e01a7e7e7cd1ef798fd87f8f0fdeba66a7500cc192b790f0549f24a37ef33988] <==
	{"level":"warn","ts":"2025-09-19T23:13:00.023924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.031706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.040492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.051616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.065636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.074997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.081805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.088472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.095349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.109843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.118599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.126611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.135370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.142426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.157573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.164474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:00.172034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59276","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T23:13:12.097690Z","caller":"traceutil/trace.go:172","msg":"trace[232139012] transaction","detail":"{read_only:false; response_revision:731; number_of_response:1; }","duration":"258.463892ms","start":"2025-09-19T23:13:11.839200Z","end":"2025-09-19T23:13:12.097664Z","steps":["trace[232139012] 'process raft request'  (duration: 226.719442ms)","trace[232139012] 'compare'  (duration: 31.635495ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:13:14.435808Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.147551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-t6v26\" limit:1 ","response":"range_response_count:1 size:5793"}
	{"level":"info","ts":"2025-09-19T23:13:14.435925Z","caller":"traceutil/trace.go:172","msg":"trace[295058405] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-t6v26; range_end:; response_count:1; response_revision:734; }","duration":"123.287502ms","start":"2025-09-19T23:13:14.312618Z","end":"2025-09-19T23:13:14.435906Z","steps":["trace[295058405] 'range keys from in-memory index tree'  (duration: 122.994923ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:13:30.060102Z","caller":"traceutil/trace.go:172","msg":"trace[1620432508] transaction","detail":"{read_only:false; response_revision:765; number_of_response:1; }","duration":"115.142546ms","start":"2025-09-19T23:13:29.944937Z","end":"2025-09-19T23:13:30.060080Z","steps":["trace[1620432508] 'process raft request'  (duration: 115.007955ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:13:30.170551Z","caller":"traceutil/trace.go:172","msg":"trace[2094453612] transaction","detail":"{read_only:false; response_revision:768; number_of_response:1; }","duration":"103.238707ms","start":"2025-09-19T23:13:30.067295Z","end":"2025-09-19T23:13:30.170534Z","steps":["trace[2094453612] 'process raft request'  (duration: 103.175902ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:13:30.170554Z","caller":"traceutil/trace.go:172","msg":"trace[2055278428] transaction","detail":"{read_only:false; response_revision:767; number_of_response:1; }","duration":"104.13866ms","start":"2025-09-19T23:13:30.066398Z","end":"2025-09-19T23:13:30.170536Z","steps":["trace[2055278428] 'process raft request'  (duration: 79.101521ms)","trace[2055278428] 'compare'  (duration: 24.840346ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:13:31.720121Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.909334ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638355411949758898 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-muifrzavafhuho37txlpqynjom\" mod_revision:756 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-muifrzavafhuho37txlpqynjom\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-muifrzavafhuho37txlpqynjom\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:13:31.720238Z","caller":"traceutil/trace.go:172","msg":"trace[395202177] transaction","detail":"{read_only:false; response_revision:771; number_of_response:1; }","duration":"185.400323ms","start":"2025-09-19T23:13:31.534819Z","end":"2025-09-19T23:13:31.720219Z","steps":["trace[395202177] 'process raft request'  (duration: 82.644458ms)","trace[395202177] 'compare'  (duration: 101.791309ms)"],"step_count":2}
	
	
	==> kernel <==
	 23:14:01 up  1:56,  0 users,  load average: 4.93, 3.90, 2.42
	Linux embed-certs-403962 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6044b48856573e666bf5ceb3935f92c8f868de2441a0ec3b09843b9492ce7bbf] <==
	I0919 23:11:19.174496       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0919 23:11:19.174681       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:11:19.174701       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:11:19.174728       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:11:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:11:19.369127       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:11:19.369149       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:11:19.369198       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:11:19.369379       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0919 23:11:49.369352       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0919 23:11:49.370693       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0919 23:11:49.370719       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0919 23:11:49.375335       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0919 23:11:50.769365       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:11:50.769493       1 metrics.go:72] Registering metrics
	I0919 23:11:50.769771       1 controller.go:711] "Syncing nftables rules"
	I0919 23:11:59.368671       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:11:59.368745       1 main.go:301] handling current node
	I0919 23:12:09.378230       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:12:09.378265       1 main.go:301] handling current node
	I0919 23:12:19.374248       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:12:19.374302       1 main.go:301] handling current node
	I0919 23:12:29.369279       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:12:29.369316       1 main.go:301] handling current node
	
	
	==> kindnet [d85d77e0cb95093df3a320c88fc83229cb7eea4b4c40ff52eafa9f2ab25a30d9] <==
	I0919 23:13:02.907547       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0919 23:13:02.908343       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0919 23:13:02.908553       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:13:02.908581       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:13:02.908612       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:13:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:13:03.204600       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:13:03.204621       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:13:03.204632       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:13:03.401837       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0919 23:13:03.804720       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:13:03.804753       1 metrics.go:72] Registering metrics
	I0919 23:13:03.804816       1 controller.go:711] "Syncing nftables rules"
	I0919 23:13:13.204462       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:13:13.204536       1 main.go:301] handling current node
	I0919 23:13:23.210251       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:13:23.210291       1 main.go:301] handling current node
	I0919 23:13:33.204317       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:13:33.204353       1 main.go:301] handling current node
	I0919 23:13:43.205277       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:13:43.205319       1 main.go:301] handling current node
	I0919 23:13:53.209554       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0919 23:13:53.209597       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bfd145fe58ffdf1763f41676fcd29f8a3ce82593ddd0832d7885c98305dfe78c] <==
	I0919 23:11:17.201114       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:11:17.602735       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:11:17.608103       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:11:17.952410       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 23:12:20.145022       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:12:31.927984       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 23:12:36.265781       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:48838: use of closed network connection
	I0919 23:12:37.057010       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 23:12:37.063803       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:12:37.063884       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 23:12:37.063959       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0919 23:12:37.154609       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.109.206.124"}
	W0919 23:12:37.165285       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:12:37.165340       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0919 23:12:37.171698       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:12:37.171767       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [fff89acc2e74b902f9e0c95e662765f6072a12aba84c1a2088fb3a0255f2b922] <==
	I0919 23:13:01.787141       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 23:13:01.898573       1 handler_proxy.go:99] no RequestInfo found in the context
	W0919 23:13:01.898626       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:13:01.898708       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:13:01.898731       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0919 23:13:01.898622       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 23:13:01.900097       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:13:04.617389       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:13:04.669450       1 controller.go:667] quota admission added evaluator for: endpoints
	I0919 23:13:04.866591       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	{"level":"warn","ts":"2025-09-19T23:13:58.163404Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002cb2d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0919 23:13:58.163527       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: client disconnected" logger="UnhandledError"
	E0919 23:13:58.163562       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"client disconnected\"}: client disconnected" logger="UnhandledError"
	E0919 23:13:58.163535       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E0919 23:13:58.163742       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="GET" URI="/api/v1/nodes/embed-certs-403962" auditID="a3a0ca7d-b48f-489b-9edd-6cafc265b21b"
	E0919 23:13:58.165302       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:13:58.165383       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 23:13:58.165317       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:13:58.166616       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:13:58.166722       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.929934ms" method="GET" path="/api/v1/nodes/embed-certs-403962" result=null
	E0919 23:13:58.166783       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.91751ms" method="GET" path="/apis/storage.k8s.io/v1/csinodes/embed-certs-403962" result=null
	
	
	==> kube-controller-manager [432944df07afe4fa031e21fb600de4c622c12827f2cba9267a85f5f1b177d65c] <==
	I0919 23:11:16.846520       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 23:11:16.846534       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 23:11:16.846667       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 23:11:16.847613       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 23:11:16.847629       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 23:11:16.847708       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 23:11:16.847709       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 23:11:16.847732       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0919 23:11:16.847788       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 23:11:16.847854       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 23:11:16.847945       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 23:11:16.848194       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 23:11:16.848583       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0919 23:11:16.849130       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 23:11:16.849202       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 23:11:16.849419       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 23:11:16.849517       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 23:11:16.849590       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-403962"
	I0919 23:11:16.849640       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 23:11:16.851859       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 23:11:16.852089       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0919 23:11:16.852259       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:11:16.853380       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 23:11:16.856890       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 23:11:16.868196       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [685fd68b08faf09a7264d900382bc219699f762de534a16181d8d0716c2a76da] <==
	I0919 23:13:04.263435       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 23:13:04.263464       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:13:04.263505       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 23:13:04.263519       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 23:13:04.263642       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0919 23:13:04.263823       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0919 23:13:04.263900       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 23:13:04.266086       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 23:13:04.269342       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 23:13:04.269401       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:13:04.269438       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 23:13:04.270699       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 23:13:04.270741       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 23:13:04.270802       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 23:13:04.273161       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0919 23:13:04.279533       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 23:13:04.279700       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 23:13:04.279842       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-403962"
	I0919 23:13:04.279915       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 23:13:04.282951       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 23:13:04.285224       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 23:13:04.286309       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 23:13:04.305110       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0919 23:13:34.277940       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:13:34.315857       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [3a603aaa7a1bcf96dec283c58dc48ba29ca8f42b1a92797b9ceba2493c3ab89c] <==
	I0919 23:13:02.364625       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:13:02.446950       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:13:02.547138       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:13:02.547195       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0919 23:13:02.547274       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:13:02.574325       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:13:02.574410       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:13:02.583670       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:13:02.584227       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:13:02.584255       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:13:02.585580       1 config.go:200] "Starting service config controller"
	I0919 23:13:02.585616       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:13:02.585651       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:13:02.586246       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:13:02.586604       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:13:02.586631       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:13:02.587219       1 config.go:309] "Starting node config controller"
	I0919 23:13:02.587236       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:13:02.587244       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:13:02.687240       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 23:13:02.687263       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 23:13:02.688408       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [c5049fc2e8ac91137c74843b7caa1255b1066b4f520bc630221be98343ed16fe] <==
	I0919 23:11:36.657767       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:11:36.727117       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:11:36.827661       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:11:36.827709       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0919 23:11:36.827818       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:11:36.852020       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:11:36.852071       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:11:36.858348       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:11:36.858858       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:11:36.858899       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:11:36.860507       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:11:36.860511       1 config.go:200] "Starting service config controller"
	I0919 23:11:36.860549       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:11:36.860561       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:11:36.860561       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:11:36.860592       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:11:36.860675       1 config.go:309] "Starting node config controller"
	I0919 23:11:36.860686       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:11:36.860692       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:11:36.960789       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:11:36.960823       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 23:11:36.960852       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3a9a8f6fc34ea35cec42c4a31ceded3d6a9e79dee3522f4ae207c04308111533] <==
	E0919 23:11:09.877917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 23:11:09.878218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 23:11:09.878431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 23:11:09.881043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 23:11:09.881208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 23:11:09.881420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 23:11:09.881516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 23:11:09.881930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 23:11:10.708036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 23:11:10.760670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 23:11:10.855505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 23:11:10.868279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 23:11:10.986598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 23:11:11.094498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 23:11:11.139674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 23:11:11.144274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 23:11:11.229400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 23:11:11.250284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 23:11:11.307607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 23:11:11.309384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 23:11:11.330528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 23:11:11.339083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 23:11:11.346994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 23:11:11.367514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I0919 23:11:12.672956       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [483f16593a289462268696d97f733dfd8ff651f4c461db8d3a0613e0aaa05534] <==
	I0919 23:12:58.147568       1 serving.go:386] Generated self-signed cert in-memory
	W0919 23:13:00.828331       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 23:13:00.828369       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 23:13:00.828381       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 23:13:00.828390       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 23:13:00.872292       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 23:13:00.872322       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:13:00.877323       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:13:00.877385       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:13:00.879018       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 23:13:00.879089       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 23:13:00.977614       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 23:14:00 embed-certs-403962 kubelet[3398]: I0919 23:14:00.624450    3398 kubelet_node_status.go:124] "Node was previously registered" node="embed-certs-403962"
	Sep 19 23:14:00 embed-certs-403962 kubelet[3398]: I0919 23:14:00.624564    3398 kubelet_node_status.go:78] "Successfully registered node" node="embed-certs-403962"
	Sep 19 23:14:00 embed-certs-403962 kubelet[3398]: I0919 23:14:00.624618    3398 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 23:14:00 embed-certs-403962 kubelet[3398]: I0919 23:14:00.625439    3398 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: I0919 23:14:01.381109    3398 apiserver.go:52] "Watching apiserver"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: I0919 23:14:01.392947    3398 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: I0919 23:14:01.409675    3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21e6f2c7-5351-4468-8e06-d1452e55ee9a-xtables-lock\") pod \"kube-proxy-5tf2s\" (UID: \"21e6f2c7-5351-4468-8e06-d1452e55ee9a\") " pod="kube-system/kube-proxy-5tf2s"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: I0919 23:14:01.409716    3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/67076ae5-98f0-45d1-b001-d046213f398f-cni-cfg\") pod \"kindnet-cfvvr\" (UID: \"67076ae5-98f0-45d1-b001-d046213f398f\") " pod="kube-system/kindnet-cfvvr"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: I0919 23:14:01.409733    3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67076ae5-98f0-45d1-b001-d046213f398f-xtables-lock\") pod \"kindnet-cfvvr\" (UID: \"67076ae5-98f0-45d1-b001-d046213f398f\") " pod="kube-system/kindnet-cfvvr"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: I0919 23:14:01.409906    3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21e6f2c7-5351-4468-8e06-d1452e55ee9a-lib-modules\") pod \"kube-proxy-5tf2s\" (UID: \"21e6f2c7-5351-4468-8e06-d1452e55ee9a\") " pod="kube-system/kube-proxy-5tf2s"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: I0919 23:14:01.409923    3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67076ae5-98f0-45d1-b001-d046213f398f-lib-modules\") pod \"kindnet-cfvvr\" (UID: \"67076ae5-98f0-45d1-b001-d046213f398f\") " pod="kube-system/kindnet-cfvvr"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: I0919 23:14:01.409971    3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/79e176eb-eb0a-449d-be1c-b1c4347e2cf6-tmp\") pod \"storage-provisioner\" (UID: \"79e176eb-eb0a-449d-be1c-b1c4347e2cf6\") " pod="kube-system/storage-provisioner"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: I0919 23:14:01.471528    3398 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-embed-certs-403962"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: I0919 23:14:01.471646    3398 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-embed-certs-403962"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: I0919 23:14:01.471979    3398 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-embed-certs-403962"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: I0919 23:14:01.472283    3398 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-embed-certs-403962"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: E0919 23:14:01.484420    3398 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-embed-certs-403962\" already exists" pod="kube-system/kube-scheduler-embed-certs-403962"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: E0919 23:14:01.489266    3398 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-403962\" already exists" pod="kube-system/kube-apiserver-embed-certs-403962"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: E0919 23:14:01.489919    3398 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-embed-certs-403962\" already exists" pod="kube-system/kube-controller-manager-embed-certs-403962"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: E0919 23:14:01.490235    3398 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-403962\" already exists" pod="kube-system/etcd-embed-certs-403962"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: I0919 23:14:01.687520    3398 scope.go:117] "RemoveContainer" containerID="a290868582b05cdfa19fbdf1cee0e75f43202233655b3841965c6c18a15540d2"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: E0919 23:14:01.737522    3398 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: E0919 23:14:01.737629    3398 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: E0919 23:14:01.737788    3398 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-g24nt_kube-system(082f2139-7bab-4b6e-8720-a11b913178b1): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" logger="UnhandledError"
	Sep 19 23:14:01 embed-certs-403962 kubelet[3398]: E0919 23:14:01.737828    3398 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-g24nt" podUID="082f2139-7bab-4b6e-8720-a11b913178b1"
	
	
	==> kubernetes-dashboard [47aa58544505ade6f4c115edc0098bf5f68e4a216f0c77407e15d893f7455d61] <==
	2025/09/19 23:13:14 Starting overwatch
	2025/09/19 23:13:14 Using namespace: kubernetes-dashboard
	2025/09/19 23:13:14 Using in-cluster config to connect to apiserver
	2025/09/19 23:13:14 Using secret token for csrf signing
	2025/09/19 23:13:14 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:13:14 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/19 23:13:14 Successful initial request to the apiserver, version: v1.34.0
	2025/09/19 23:13:14 Generating JWE encryption key
	2025/09/19 23:13:14 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/19 23:13:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/19 23:13:14 Initializing JWE encryption key from synchronized object
	2025/09/19 23:13:14 Creating in-cluster Sidecar client
	2025/09/19 23:13:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:13:14 Serving insecurely on HTTP port: 9090
	2025/09/19 23:13:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4563e410dc0f252652afbc6f753a6803481cab963203e75bf937ef3a03215571] <==
	I0919 23:13:46.748476       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 23:13:46.756538       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 23:13:46.756586       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0919 23:13:46.759328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:50.214902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:55.561544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:13:59.161303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ea70f76b17b6e8157ddbc228b4f91d0b0061b96d5691b7517b5b27a70e7700f0] <==
	I0919 23:13:02.362002       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 23:13:32.364694       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-403962 -n embed-certs-403962
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-403962 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-g24nt
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-403962 describe pod metrics-server-746fcd58dc-g24nt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-403962 describe pod metrics-server-746fcd58dc-g24nt: exit status 1 (111.306258ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-g24nt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-403962 describe pod metrics-server-746fcd58dc-g24nt: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (9.81s)
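
Both Pause failures in this report (embed-certs above, newest-cni below) trip on the same post-unpause check: after `unpause`, `status --format={{.Kubelet}}` still prints "Stopped" instead of "Running". The Go sketch below is illustrative only and is not minikube's test code; it replays the pause/unpause/status sequence exactly as the commands appear in the transcripts (same binary path, flags, and profile name) and reports the same condition the tests flag. Note that `minikube status` exits non-zero while a component is not Running, which is why the test output marks those exits as "may be ok"; the sketch therefore ignores the exit code of `status` and only inspects its printed state. The helper names (`status`, `run`) are local to this sketch.

	// pause_check.go: illustrative reproduction of the Pause subtest's checks.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	const minikubeBin = "out/minikube-linux-amd64" // binary path used in the test run above

	// status runs `minikube status --format={{.<field>}} -p <profile> -n <profile>`
	// and returns the printed state. A non-zero exit is expected while components
	// are paused or stopped, so the error is deliberately ignored here.
	func status(profile, field string) string {
		out, _ := exec.Command(minikubeBin, "status",
			fmt.Sprintf("--format={{.%s}}", field), "-p", profile, "-n", profile).Output()
		return strings.TrimSpace(string(out))
	}

	// run executes a minikube subcommand and streams its output, mirroring the
	// `pause` and `unpause` invocations shown in the logs.
	func run(args ...string) error {
		cmd := exec.Command(minikubeBin, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		profile := "newest-cni-312465" // profile from the failing test; any existing profile works

		if err := run("pause", "-p", profile, "--alsologtostderr", "-v=1"); err != nil {
			fmt.Println("pause failed:", err)
			os.Exit(1)
		}
		fmt.Println("apiserver after pause:", status(profile, "APIServer")) // transcript shows "Paused"

		if err := run("unpause", "-p", profile, "--alsologtostderr", "-v=1"); err != nil {
			fmt.Println("unpause failed:", err)
			os.Exit(1)
		}
		if got := status(profile, "Kubelet"); got != "Running" {
			// This is the exact condition both failing subtests report.
			fmt.Printf("post-unpause kubelet status = %q; want \"Running\"\n", got)
			os.Exit(1)
		}
		fmt.Println("kubelet running after unpause")
	}
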

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (9.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-312465 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-312465 -n newest-cni-312465
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-312465 -n newest-cni-312465: exit status 2 (397.118565ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-312465 -n newest-cni-312465
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-312465 -n newest-cni-312465: exit status 2 (380.288252ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-312465 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-312465 -n newest-cni-312465
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-312465 -n newest-cni-312465: exit status 2 (335.705135ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-312465 -n newest-cni-312465
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-312465 -n newest-cni-312465: exit status 2 (342.594ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-312465
helpers_test.go:243: (dbg) docker inspect newest-cni-312465:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2da8ead24bf634d8f6a729243d038d798fafa21ac7bf909ed9bba2f621fc3b69",
	        "Created": "2025-09-19T23:13:32.170200572Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 316984,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T23:14:07.83636793Z",
	            "FinishedAt": "2025-09-19T23:14:05.581387247Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/2da8ead24bf634d8f6a729243d038d798fafa21ac7bf909ed9bba2f621fc3b69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2da8ead24bf634d8f6a729243d038d798fafa21ac7bf909ed9bba2f621fc3b69/hostname",
	        "HostsPath": "/var/lib/docker/containers/2da8ead24bf634d8f6a729243d038d798fafa21ac7bf909ed9bba2f621fc3b69/hosts",
	        "LogPath": "/var/lib/docker/containers/2da8ead24bf634d8f6a729243d038d798fafa21ac7bf909ed9bba2f621fc3b69/2da8ead24bf634d8f6a729243d038d798fafa21ac7bf909ed9bba2f621fc3b69-json.log",
	        "Name": "/newest-cni-312465",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-312465:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-312465",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2da8ead24bf634d8f6a729243d038d798fafa21ac7bf909ed9bba2f621fc3b69",
	                "LowerDir": "/var/lib/docker/overlay2/6d62e2f859b958fea90eb7b70c32d05ee58d2721ea59c3f0af602c2e8c0e8707-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6d62e2f859b958fea90eb7b70c32d05ee58d2721ea59c3f0af602c2e8c0e8707/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6d62e2f859b958fea90eb7b70c32d05ee58d2721ea59c3f0af602c2e8c0e8707/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6d62e2f859b958fea90eb7b70c32d05ee58d2721ea59c3f0af602c2e8c0e8707/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-312465",
	                "Source": "/var/lib/docker/volumes/newest-cni-312465/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-312465",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-312465",
	                "name.minikube.sigs.k8s.io": "newest-cni-312465",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9ab14afe362fa4be0457cf5b1c00525ca82f33eaae541a71efa68e2c4f58cbe8",
	            "SandboxKey": "/var/run/docker/netns/9ab14afe362f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-312465": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:e3:1c:fd:f5:ae",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ecb131f9ec1b2c372dfdf9b0ed72aaad0b8b0fc77db2fbc20949c0f4dfc0485e",
	                    "EndpointID": "294103dc868b470b92ce317bb9d64dd2a19fe0b311453063a6f9f2da355e591c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-312465",
	                        "2da8ead24bf6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-312465 -n newest-cni-312465
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-312465 -n newest-cni-312465: exit status 2 (370.18242ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-312465 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-312465 logs -n 25: (1.897198744s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ addons  │ enable dashboard -p embed-certs-403962 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-403962        │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ start   │ -p embed-certs-403962 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-403962        │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:13 UTC │
	│ start   │ -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-430859 │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-430859 │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ delete  │ -p kubernetes-upgrade-430859                                                                                                                                                                                                                        │ kubernetes-upgrade-430859 │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ start   │ -p newest-cni-312465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:14 UTC │
	│ image   │ no-preload-364197 image list --format=json                                                                                                                                                                                                          │ no-preload-364197         │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ pause   │ -p no-preload-364197 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-364197         │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ unpause │ -p no-preload-364197 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-364197         │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ image   │ embed-certs-403962 image list --format=json                                                                                                                                                                                                         │ embed-certs-403962        │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ pause   │ -p embed-certs-403962 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-403962        │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ unpause │ -p embed-certs-403962 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-403962        │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ delete  │ -p no-preload-364197                                                                                                                                                                                                                                │ no-preload-364197         │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:14 UTC │
	│ delete  │ -p no-preload-364197                                                                                                                                                                                                                                │ no-preload-364197         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ start   │ -p auto-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │                     │
	│ delete  │ -p embed-certs-403962                                                                                                                                                                                                                               │ embed-certs-403962        │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-312465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ stop    │ -p newest-cni-312465 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ delete  │ -p embed-certs-403962                                                                                                                                                                                                                               │ embed-certs-403962        │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-312465 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ start   │ -p newest-cni-312465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ start   │ -p kindnet-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd                                                                                                      │ kindnet-896447            │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │                     │
	│ image   │ newest-cni-312465 image list --format=json                                                                                                                                                                                                          │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ pause   │ -p newest-cni-312465 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ unpause │ -p newest-cni-312465 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:14:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:14:07.545450  316421 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:14:07.545589  316421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:14:07.545601  316421 out.go:374] Setting ErrFile to fd 2...
	I0919 23:14:07.545607  316421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:14:07.545908  316421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 23:14:07.546484  316421 out.go:368] Setting JSON to false
	I0919 23:14:07.547671  316421 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6992,"bootTime":1758316656,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:14:07.547785  316421 start.go:140] virtualization: kvm guest
	I0919 23:14:07.549879  316421 out.go:179] * [kindnet-896447] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:14:07.552990  316421 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:14:07.553022  316421 notify.go:220] Checking for updates...
	I0919 23:14:07.559610  316421 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:14:07.561382  316421 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:14:07.566189  316421 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 23:14:07.568116  316421 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:14:07.570024  316421 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:14:07.546807  316407 config.go:182] Loaded profile config "newest-cni-312465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:07.547363  316407 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:14:07.577693  316407 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:14:07.577797  316407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:14:07.648780  316407 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:68 SystemTime:2025-09-19 23:14:07.636150467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:14:07.648963  316407 docker.go:318] overlay module found
	I0919 23:14:07.652148  316407 out.go:179] * Using the docker driver based on existing profile
	I0919 23:14:07.572300  316421 config.go:182] Loaded profile config "auto-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:07.572495  316421 config.go:182] Loaded profile config "default-k8s-diff-port-149888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:07.572664  316421 config.go:182] Loaded profile config "newest-cni-312465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:07.572820  316421 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:14:07.600815  316421 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:14:07.600925  316421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:14:07.680865  316421 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:68 SystemTime:2025-09-19 23:14:07.665771969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:14:07.681124  316421 docker.go:318] overlay module found
	I0919 23:14:07.688792  316421 out.go:179] * Using the docker driver based on user configuration
	I0919 23:14:07.655140  316407 start.go:304] selected driver: docker
	I0919 23:14:07.655198  316407 start.go:918] validating driver "docker" against &{Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:14:07.655339  316407 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:14:07.655999  316407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:14:07.737538  316407 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:68 SystemTime:2025-09-19 23:14:07.72575158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:14:07.737821  316407 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0919 23:14:07.737843  316407 cni.go:84] Creating CNI manager for ""
	I0919 23:14:07.737895  316407 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:14:07.737941  316407 start.go:348] cluster config:
	{Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:14:07.741137  316407 out.go:179] * Starting "newest-cni-312465" primary control-plane node in "newest-cni-312465" cluster
	I0919 23:14:07.742815  316407 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 23:14:07.744122  316407 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:14:07.689907  316421 start.go:304] selected driver: docker
	I0919 23:14:07.689929  316421 start.go:918] validating driver "docker" against <nil>
	I0919 23:14:07.689954  316421 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:14:07.690652  316421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:14:07.767818  316421 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:false NGoroutines:80 SystemTime:2025-09-19 23:14:07.754934801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:14:07.768061  316421 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 23:14:07.768365  316421 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:14:07.770670  316421 out.go:179] * Using Docker driver with root privileges
	I0919 23:14:07.771969  316421 cni.go:84] Creating CNI manager for "kindnet"
	I0919 23:14:07.771993  316421 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 23:14:07.772099  316421 start.go:348] cluster config:
	{Name:kindnet-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:14:07.774357  316421 out.go:179] * Starting "kindnet-896447" primary control-plane node in "kindnet-896447" cluster
	I0919 23:14:07.775668  316421 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 23:14:07.776953  316421 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:14:07.745565  316407 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:14:07.745615  316407 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 23:14:07.745624  316407 cache.go:58] Caching tarball of preloaded images
	I0919 23:14:07.745696  316407 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:14:07.745713  316407 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:14:07.745724  316407 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 23:14:07.745887  316407 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json ...
	I0919 23:14:07.773705  316407 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:14:07.773726  316407 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:14:07.773767  316407 cache.go:232] Successfully downloaded all kic artifacts
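The image.go lines above show the pinned kicbase digest being found in the local daemon, so the pull is skipped. A rough shell equivalent of that existence check (illustrative only, not the code minikube actually runs):

    # Exits non-zero if the pinned kicbase image is not in the local daemon.
    docker image inspect \
      gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 \
      --format 'present: {{.Id}}' || echo 'not present: a pull would be required'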
	I0919 23:14:07.773797  316407 start.go:360] acquireMachinesLock for newest-cni-312465: {Name:mkdaed0f91b48ccb0806887f4c48e7b6207e9286 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:14:07.773868  316407 start.go:364] duration metric: took 45.525µs to acquireMachinesLock for "newest-cni-312465"
	I0919 23:14:07.773892  316407 start.go:96] Skipping create...Using existing machine configuration
	I0919 23:14:07.773898  316407 fix.go:54] fixHost starting: 
	I0919 23:14:07.774109  316407 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:14:07.796796  316407 fix.go:112] recreateIfNeeded on newest-cni-312465: state=Stopped err=<nil>
	W0919 23:14:07.796850  316407 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 23:14:07.778192  316421 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:14:07.778230  316421 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:14:07.778236  316421 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 23:14:07.778280  316421 cache.go:58] Caching tarball of preloaded images
	I0919 23:14:07.778375  316421 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:14:07.778387  316421 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 23:14:07.778510  316421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/config.json ...
	I0919 23:14:07.778540  316421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/config.json: {Name:mkfc753d97a896ef89666bc40d14195b2cd88207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:07.804596  316421 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:14:07.804618  316421 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:14:07.804637  316421 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:14:07.804746  316421 start.go:360] acquireMachinesLock for kindnet-896447: {Name:mke345f56beddc08f221f0e34bb3ed88e95b38fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:14:07.804878  316421 start.go:364] duration metric: took 107.11µs to acquireMachinesLock for "kindnet-896447"
	I0919 23:14:07.804913  316421 start.go:93] Provisioning new machine with config: &{Name:kindnet-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:14:07.805025  316421 start.go:125] createHost starting for "" (driver="docker")
	I0919 23:14:03.138342  314456 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:14:03.138581  314456 start.go:159] libmachine.API.Create for "auto-896447" (driver="docker")
	I0919 23:14:03.138611  314456 client.go:168] LocalClient.Create starting
	I0919 23:14:03.138723  314456 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 23:14:03.138757  314456 main.go:141] libmachine: Decoding PEM data...
	I0919 23:14:03.138767  314456 main.go:141] libmachine: Parsing certificate...
	I0919 23:14:03.138818  314456 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 23:14:03.138833  314456 main.go:141] libmachine: Decoding PEM data...
	I0919 23:14:03.138841  314456 main.go:141] libmachine: Parsing certificate...
	I0919 23:14:03.139221  314456 cli_runner.go:164] Run: docker network inspect auto-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:14:03.163836  314456 cli_runner.go:211] docker network inspect auto-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:14:03.163916  314456 network_create.go:284] running [docker network inspect auto-896447] to gather additional debugging logs...
	I0919 23:14:03.163947  314456 cli_runner.go:164] Run: docker network inspect auto-896447
	W0919 23:14:03.187238  314456 cli_runner.go:211] docker network inspect auto-896447 returned with exit code 1
	I0919 23:14:03.187271  314456 network_create.go:287] error running [docker network inspect auto-896447]: docker network inspect auto-896447: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-896447 not found
	I0919 23:14:03.187284  314456 network_create.go:289] output of [docker network inspect auto-896447]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-896447 not found
	
	** /stderr **
	I0919 23:14:03.187380  314456 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:14:03.225659  314456 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-465af21e2d8d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:b5:e0:20:10:48} reservation:<nil>}
	I0919 23:14:03.226784  314456 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd9dd989b948 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:ff:32:51:a1:28} reservation:<nil>}
	I0919 23:14:03.227938  314456 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d5cd7c460be9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:d8:0f:d3:49:b9} reservation:<nil>}
	I0919 23:14:03.229281  314456 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-eeb244b5b4d9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:19:45:7a:f8:43} reservation:<nil>}
	I0919 23:14:03.230695  314456 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000491960}
	I0919 23:14:03.230725  314456 network_create.go:124] attempt to create docker network auto-896447 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0919 23:14:03.230780  314456 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-896447 auto-896447
	I0919 23:14:03.312517  314456 network_create.go:108] docker network auto-896447 192.168.85.0/24 created
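The network_create lines above walk the private 192.168.x.0/24 ranges, skip the four subnets already claimed by other profiles, and create a labelled bridge network on the first free one. The equivalent standalone command, copied from the invocation logged above:

    # Create the per-profile bridge network on the first free /24 (192.168.85.0/24 here).
    docker network create --driver=bridge \
      --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=auto-896447 \
      auto-896447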
	I0919 23:14:03.312557  314456 kic.go:121] calculated static IP "192.168.85.2" for the "auto-896447" container
	I0919 23:14:03.312645  314456 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:14:03.338796  314456 cli_runner.go:164] Run: docker volume create auto-896447 --label name.minikube.sigs.k8s.io=auto-896447 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:14:03.379021  314456 oci.go:103] Successfully created a docker volume auto-896447
	I0919 23:14:03.379332  314456 cli_runner.go:164] Run: docker run --rm --name auto-896447-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-896447 --entrypoint /usr/bin/test -v auto-896447:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:14:03.920792  314456 oci.go:107] Successfully prepared a docker volume auto-896447
	I0919 23:14:03.920827  314456 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:14:03.920851  314456 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:14:03.920918  314456 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-896447:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 23:14:07.233839  314456 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-896447:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.312873654s)
	I0919 23:14:07.233870  314456 kic.go:203] duration metric: took 3.313016518s to extract preloaded images to volume ...
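The preload step above unpacks the cached image tarball into the profile's named volume using a throwaway kicbase container, so the node starts with its containerd content store already populated. A sketch of the same operation ($PRELOAD_TARBALL stands in for the host path shown in the log):

    # Extract the lz4-compressed preload into the auto-896447 volume.
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
      -v auto-896447:/extractDir \
      gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 \
      -I lz4 -xf /preloaded.tar -C /extractDir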
	W0919 23:14:07.233954  314456 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:14:07.233982  314456 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:14:07.234020  314456 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:14:07.307662  314456 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-896447 --name auto-896447 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-896447 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-896447 --network auto-896447 --ip 192.168.85.2 --volume auto-896447:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:14:07.640242  314456 cli_runner.go:164] Run: docker container inspect auto-896447 --format={{.State.Running}}
	I0919 23:14:07.667399  314456 cli_runner.go:164] Run: docker container inspect auto-896447 --format={{.State.Status}}
	I0919 23:14:07.694975  314456 cli_runner.go:164] Run: docker exec auto-896447 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:14:07.765646  314456 oci.go:144] the created container "auto-896447" has a running status.
	I0919 23:14:07.765682  314456 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/auto-896447/id_rsa...
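Once the node container is up, the oci/kic lines above verify its state and that iptables alternatives exist inside it before SSH access is provisioned. The same checks by hand, mirroring the commands logged above:

    # Confirm the freshly created node container is running and has iptables wired in.
    docker container inspect auto-896447 --format '{{.State.Running}} / {{.State.Status}}'
    docker exec auto-896447 stat /var/lib/dpkg/alternatives/iptables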
	I0919 23:14:05.235334  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:14:05.235366  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:14:05.235374  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:14:05.235383  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:14:05.235391  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:14:05.235397  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:14:05.235405  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:14:05.235410  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:14:05.235416  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:14:05.235443  294587 retry.go:31] will retry after 6.715487454s: missing components: kube-dns
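The retry above is the system_pods check waiting for CoreDNS to become Ready on the default-k8s-diff-port-149888 cluster; everything else in kube-system is already Running. One way to watch the same condition by hand (assuming kubectl's current context points at that cluster):

    # CoreDNS carries the k8s-app=kube-dns label in kubeadm-style clusters.
    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
    kubectl -n kube-system get events --field-selector involvedObject.name=coredns-66bc5c9577-qj565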
	I0919 23:14:07.871215  314456 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/auto-896447/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:14:07.911234  314456 cli_runner.go:164] Run: docker container inspect auto-896447 --format={{.State.Status}}
	I0919 23:14:07.948482  314456 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:14:07.948507  314456 kic_runner.go:114] Args: [docker exec --privileged auto-896447 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:14:08.029197  314456 cli_runner.go:164] Run: docker container inspect auto-896447 --format={{.State.Status}}
	I0919 23:14:08.055498  314456 machine.go:93] provisionDockerMachine start ...
	I0919 23:14:08.055605  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:08.088840  314456 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:08.089645  314456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I0919 23:14:08.089689  314456 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:14:08.239460  314456 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-896447
	
	I0919 23:14:08.239490  314456 ubuntu.go:182] provisioning hostname "auto-896447"
	I0919 23:14:08.239558  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:08.266255  314456 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:08.266566  314456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I0919 23:14:08.266593  314456 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-896447 && echo "auto-896447" | sudo tee /etc/hostname
	I0919 23:14:08.435368  314456 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-896447
	
	I0919 23:14:08.435449  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:08.468455  314456 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:08.468769  314456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I0919 23:14:08.469004  314456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-896447' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-896447/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-896447' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:14:08.631462  314456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:14:08.631506  314456 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 23:14:08.631533  314456 ubuntu.go:190] setting up certificates
	I0919 23:14:08.631546  314456 provision.go:84] configureAuth start
	I0919 23:14:08.631611  314456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-896447
	I0919 23:14:08.653148  314456 provision.go:143] copyHostCerts
	I0919 23:14:08.653239  314456 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 23:14:08.653256  314456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 23:14:08.653351  314456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 23:14:08.653474  314456 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 23:14:08.653487  314456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 23:14:08.653529  314456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 23:14:08.653611  314456 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 23:14:08.653624  314456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 23:14:08.653664  314456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 23:14:08.653748  314456 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.auto-896447 san=[127.0.0.1 192.168.85.2 auto-896447 localhost minikube]
	I0919 23:14:09.278100  314456 provision.go:177] copyRemoteCerts
	I0919 23:14:09.278176  314456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:14:09.278231  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:09.301359  314456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/auto-896447/id_rsa Username:docker}
	I0919 23:14:09.403022  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 23:14:09.434598  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0919 23:14:09.462442  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 23:14:09.491772  314456 provision.go:87] duration metric: took 860.211434ms to configureAuth
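configureAuth above generates a server certificate with the SANs listed earlier (127.0.0.1, 192.168.85.2, auto-896447, localhost, minikube) and copies it to /etc/docker on the node. A quick, illustrative way to confirm what landed there once the node is reachable (assumes OpenSSL 1.1.1+ for the -ext flag):

    # Print the subject and SANs of the server cert that was just copied in.
    minikube -p auto-896447 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName"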
	I0919 23:14:09.491798  314456 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:14:09.491935  314456 config.go:182] Loaded profile config "auto-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:09.491946  314456 machine.go:96] duration metric: took 1.436426219s to provisionDockerMachine
	I0919 23:14:09.491952  314456 client.go:171] duration metric: took 6.353335489s to LocalClient.Create
	I0919 23:14:09.491969  314456 start.go:167] duration metric: took 6.35338915s to libmachine.API.Create "auto-896447"
	I0919 23:14:09.491978  314456 start.go:293] postStartSetup for "auto-896447" (driver="docker")
	I0919 23:14:09.491985  314456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:14:09.492030  314456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:14:09.492068  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:09.512712  314456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/auto-896447/id_rsa Username:docker}
	I0919 23:14:09.655581  314456 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:14:09.660004  314456 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:14:09.660046  314456 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:14:09.660058  314456 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:14:09.660067  314456 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:14:09.660080  314456 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 23:14:09.660170  314456 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 23:14:09.660277  314456 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 23:14:09.660445  314456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:14:09.672252  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:14:09.761837  314456 start.go:296] duration metric: took 269.845026ms for postStartSetup
	I0919 23:14:09.817334  314456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-896447
	I0919 23:14:09.840461  314456 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/config.json ...
	I0919 23:14:09.881702  314456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:14:09.881770  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:09.903252  314456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/auto-896447/id_rsa Username:docker}
	I0919 23:14:09.998022  314456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:14:10.004074  314456 start.go:128] duration metric: took 6.869525965s to createHost
	I0919 23:14:10.004113  314456 start.go:83] releasing machines lock for "auto-896447", held for 6.869689556s
	I0919 23:14:10.004324  314456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-896447
	I0919 23:14:10.032708  314456 ssh_runner.go:195] Run: cat /version.json
	I0919 23:14:10.032764  314456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:14:10.032767  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:10.032841  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:10.057994  314456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/auto-896447/id_rsa Username:docker}
	I0919 23:14:10.058468  314456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/auto-896447/id_rsa Username:docker}
	I0919 23:14:10.247200  314456 ssh_runner.go:195] Run: systemctl --version
	I0919 23:14:10.253261  314456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:14:10.259353  314456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:14:10.760782  314456 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:14:10.760874  314456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:14:11.008847  314456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 23:14:11.008872  314456 start.go:495] detecting cgroup driver to use...
	I0919 23:14:11.008907  314456 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:14:11.008956  314456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 23:14:11.025090  314456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:14:11.040151  314456 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:14:11.040238  314456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:14:11.060252  314456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:14:11.078289  314456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:14:11.147399  314456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:14:11.303364  314456 docker.go:234] disabling docker service ...
	I0919 23:14:11.303457  314456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:14:11.326400  314456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:14:11.340322  314456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:14:11.484467  314456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:14:11.561333  314456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:14:11.574426  314456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:14:11.595678  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:14:11.685978  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:14:11.743946  314456 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:14:11.744025  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:14:11.813228  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:14:11.827261  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:14:11.840017  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:14:11.852548  314456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:14:11.868315  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:14:11.884044  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:14:11.899639  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:14:11.912487  314456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:14:11.924008  314456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:14:11.935620  314456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:14:12.012697  314456 ssh_runner.go:195] Run: sudo systemctl restart containerd
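The sed edits above switch containerd to the systemd cgroup driver, pin the pause image, point CNI at /etc/cni/net.d, and re-enable unprivileged ports, after which the daemon is reloaded and restarted. A small sketch for verifying the result inside the node (expected values are taken from the commands logged above):

    # Confirm the key settings took effect, then check the service came back up.
    grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    #   SystemdCgroup = true
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   conf_dir = "/etc/cni/net.d"
    systemctl is-active containerd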
	I0919 23:14:12.157503  314456 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 23:14:12.157576  314456 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 23:14:12.162651  314456 start.go:563] Will wait 60s for crictl version
	I0919 23:14:12.162719  314456 ssh_runner.go:195] Run: which crictl
	I0919 23:14:12.166826  314456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:14:12.206011  314456 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
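The crictl version output above shows containerd 1.7.27 answering on the endpoint that was written to /etc/crictl.yaml a few steps earlier. To poke at the same socket by hand inside the node (illustrative):

    # The endpoint crictl was configured with, plus a basic container listing over it.
    cat /etc/crictl.yaml
    sudo crictl ps -a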
	I0919 23:14:12.206090  314456 ssh_runner.go:195] Run: containerd --version
	I0919 23:14:12.233985  314456 ssh_runner.go:195] Run: containerd --version
	I0919 23:14:12.268850  314456 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 23:14:07.799109  316407 out.go:252] * Restarting existing docker container for "newest-cni-312465" ...
	I0919 23:14:07.799250  316407 cli_runner.go:164] Run: docker start newest-cni-312465
	I0919 23:14:08.165502  316407 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:14:08.191333  316407 kic.go:430] container "newest-cni-312465" state is running.
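For the newest-cni-312465 profile the machine already exists, so instead of creating a new container minikube simply starts the stopped one and waits for it to report a running state. The same two steps by hand, mirroring the commands logged above:

    # Restart the stopped kic container and confirm its state.
    docker start newest-cni-312465
    docker container inspect newest-cni-312465 --format '{{.State.Status}}'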
	I0919 23:14:08.191985  316407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:14:08.220318  316407 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json ...
	I0919 23:14:08.220619  316407 machine.go:93] provisionDockerMachine start ...
	I0919 23:14:08.220704  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:08.249108  316407 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:08.249544  316407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I0919 23:14:08.249575  316407 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:14:08.250359  316407 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41604->127.0.0.1:33109: read: connection reset by peer
	I0919 23:14:11.394931  316407 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-312465
	
	I0919 23:14:11.394970  316407 ubuntu.go:182] provisioning hostname "newest-cni-312465"
	I0919 23:14:11.395023  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:11.416702  316407 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:11.416943  316407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I0919 23:14:11.416972  316407 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-312465 && echo "newest-cni-312465" | sudo tee /etc/hostname
	I0919 23:14:11.580189  316407 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-312465
	
	I0919 23:14:11.580280  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:11.602904  316407 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:11.603213  316407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I0919 23:14:11.603249  316407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-312465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-312465/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-312465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:14:11.746229  316407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:14:11.746263  316407 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 23:14:11.746299  316407 ubuntu.go:190] setting up certificates
	I0919 23:14:11.746314  316407 provision.go:84] configureAuth start
	I0919 23:14:11.746382  316407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:14:11.766971  316407 provision.go:143] copyHostCerts
	I0919 23:14:11.767027  316407 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 23:14:11.767039  316407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 23:14:11.809339  316407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 23:14:11.809538  316407 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 23:14:11.809553  316407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 23:14:11.809590  316407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 23:14:11.809685  316407 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 23:14:11.809695  316407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 23:14:11.809720  316407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 23:14:11.809787  316407 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.newest-cni-312465 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-312465]
	I0919 23:14:11.886967  316407 provision.go:177] copyRemoteCerts
	I0919 23:14:11.887021  316407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:14:11.887187  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:11.910808  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:12.013119  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 23:14:12.045568  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0919 23:14:12.080432  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:14:12.113912  316407 provision.go:87] duration metric: took 367.58246ms to configureAuth
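
Note on the configureAuth step above: it regenerated the host-side CA/client material and then, via the copyRemoteCerts/scp lines, pushed ca.pem, server.pem and server-key.pem to /etc/docker on the node. A minimal, illustrative way to spot-check those files over the same SSH mapping the log reports (key path and port 33109 are taken from the log; this check is not part of the test run itself):

    ssh -i /home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa \
        -p 33109 docker@127.0.0.1 \
        'sudo ls -l /etc/docker && sudo openssl x509 -noout -subject -enddate -in /etc/docker/server.pem'
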
	I0919 23:14:12.113947  316407 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:14:12.114239  316407 config.go:182] Loaded profile config "newest-cni-312465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:12.114261  316407 machine.go:96] duration metric: took 3.893618945s to provisionDockerMachine
	I0919 23:14:12.114272  316407 start.go:293] postStartSetup for "newest-cni-312465" (driver="docker")
	I0919 23:14:12.114286  316407 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:14:12.114352  316407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:14:12.114401  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:12.138333  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:12.243253  316407 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:14:12.248521  316407 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:14:12.248559  316407 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:14:12.248626  316407 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:14:12.248650  316407 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:14:12.248668  316407 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 23:14:12.248746  316407 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 23:14:12.248850  316407 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 23:14:12.248986  316407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:14:12.261592  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:14:12.296995  316407 start.go:296] duration metric: took 182.703774ms for postStartSetup
	I0919 23:14:12.297110  316407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:14:12.297172  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:12.320307  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:12.423965  316407 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:14:12.429220  316407 fix.go:56] duration metric: took 4.655313325s for fixHost
	I0919 23:14:12.429248  316407 start.go:83] releasing machines lock for "newest-cni-312465", held for 4.655366677s
	I0919 23:14:12.429319  316407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:14:12.452078  316407 ssh_runner.go:195] Run: cat /version.json
	I0919 23:14:12.452135  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:12.452446  316407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:14:12.452528  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:12.476354  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:12.477830  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:07.810675  316421 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:14:07.811012  316421 start.go:159] libmachine.API.Create for "kindnet-896447" (driver="docker")
	I0919 23:14:07.811053  316421 client.go:168] LocalClient.Create starting
	I0919 23:14:07.811166  316421 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 23:14:07.811216  316421 main.go:141] libmachine: Decoding PEM data...
	I0919 23:14:07.811246  316421 main.go:141] libmachine: Parsing certificate...
	I0919 23:14:07.811308  316421 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 23:14:07.811332  316421 main.go:141] libmachine: Decoding PEM data...
	I0919 23:14:07.811348  316421 main.go:141] libmachine: Parsing certificate...
	I0919 23:14:07.811810  316421 cli_runner.go:164] Run: docker network inspect kindnet-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:14:07.839117  316421 cli_runner.go:211] docker network inspect kindnet-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:14:07.839236  316421 network_create.go:284] running [docker network inspect kindnet-896447] to gather additional debugging logs...
	I0919 23:14:07.839256  316421 cli_runner.go:164] Run: docker network inspect kindnet-896447
	W0919 23:14:07.863663  316421 cli_runner.go:211] docker network inspect kindnet-896447 returned with exit code 1
	I0919 23:14:07.863691  316421 network_create.go:287] error running [docker network inspect kindnet-896447]: docker network inspect kindnet-896447: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-896447 not found
	I0919 23:14:07.863703  316421 network_create.go:289] output of [docker network inspect kindnet-896447]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-896447 not found
	
	** /stderr **
	I0919 23:14:07.863830  316421 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:14:07.891284  316421 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-465af21e2d8d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:b5:e0:20:10:48} reservation:<nil>}
	I0919 23:14:07.892295  316421 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd9dd989b948 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:ff:32:51:a1:28} reservation:<nil>}
	I0919 23:14:07.893272  316421 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d5cd7c460be9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:d8:0f:d3:49:b9} reservation:<nil>}
	I0919 23:14:07.894477  316421 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c37c80}
	I0919 23:14:07.894541  316421 network_create.go:124] attempt to create docker network kindnet-896447 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0919 23:14:07.894603  316421 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-896447 kindnet-896447
	I0919 23:14:08.010898  316421 network_create.go:108] docker network kindnet-896447 192.168.76.0/24 created
	I0919 23:14:08.010930  316421 kic.go:121] calculated static IP "192.168.76.2" for the "kindnet-896447" container
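
The three "skipping subnet ... that is taken" lines above show how minikube probes candidate /24s before settling on 192.168.76.0/24. A hedged sketch of the equivalent manual check and create (the create options mirror the docker network create call in the log; the listing one-liner is illustrative only):

    # list the subnets already claimed by existing docker networks
    docker network ls -q | xargs -n1 docker network inspect --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
    # create the bridge network on the first free candidate
    docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
        -o com.docker.network.driver.mtu=1500 \
        --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-896447 \
        kindnet-896447
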
	I0919 23:14:08.010989  316421 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:14:08.037265  316421 cli_runner.go:164] Run: docker volume create kindnet-896447 --label name.minikube.sigs.k8s.io=kindnet-896447 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:14:08.065138  316421 oci.go:103] Successfully created a docker volume kindnet-896447
	I0919 23:14:08.065231  316421 cli_runner.go:164] Run: docker run --rm --name kindnet-896447-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-896447 --entrypoint /usr/bin/test -v kindnet-896447:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:14:08.576800  316421 oci.go:107] Successfully prepared a docker volume kindnet-896447
	I0919 23:14:08.576842  316421 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:14:08.576864  316421 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:14:08.576953  316421 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-896447:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 23:14:11.837198  316421 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-896447:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.260170064s)
	I0919 23:14:11.837237  316421 kic.go:203] duration metric: took 3.260370186s to extract preloaded images to volume ...
	W0919 23:14:11.837327  316421 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:14:11.837360  316421 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:14:11.837394  316421 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:14:11.904498  316421 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-896447 --name kindnet-896447 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-896447 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-896447 --network kindnet-896447 --ip 192.168.76.2 --volume kindnet-896447:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:14:12.240562  316421 cli_runner.go:164] Run: docker container inspect kindnet-896447 --format={{.State.Running}}
	I0919 23:14:12.262399  316421 cli_runner.go:164] Run: docker container inspect kindnet-896447 --format={{.State.Status}}
	I0919 23:14:12.286010  316421 cli_runner.go:164] Run: docker exec kindnet-896447 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:14:12.342350  316421 oci.go:144] the created container "kindnet-896447" has a running status.
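
Once the kic container reports a running status, the node is reachable only through the host ports Docker published for it (see the --publish flags in the docker run line above). An illustrative way to confirm the state, node IP and mapped SSH port by hand:

    docker container inspect kindnet-896447 --format '{{.State.Status}}'
    docker container inspect kindnet-896447 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
    docker port kindnet-896447 22/tcp
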
	I0919 23:14:12.342386  316421 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/kindnet-896447/id_rsa...
	I0919 23:14:12.270695  314456 cli_runner.go:164] Run: docker network inspect auto-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:14:12.292050  314456 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0919 23:14:12.297258  314456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:14:12.313054  314456 kubeadm.go:875] updating cluster {Name:auto-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:auto-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:14:12.313261  314456 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:14:12.313333  314456 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:14:12.367198  314456 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:14:12.367225  314456 containerd.go:534] Images already preloaded, skipping extraction
	I0919 23:14:12.367330  314456 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:14:12.411226  314456 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:14:12.411257  314456 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:14:12.411268  314456 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0 containerd true true} ...
	I0919 23:14:12.411411  314456 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-896447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:auto-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:14:12.411481  314456 ssh_runner.go:195] Run: sudo crictl info
	I0919 23:14:12.456765  314456 cni.go:84] Creating CNI manager for ""
	I0919 23:14:12.456792  314456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:14:12.456811  314456 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:14:12.456838  314456 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-896447 NodeName:auto-896447 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:14:12.457025  314456 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "auto-896447"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:14:12.457105  314456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:14:12.470628  314456 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:14:12.470705  314456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:14:12.485818  314456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0919 23:14:12.517031  314456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:14:12.549426  314456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
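
At this point the rendered kubeadm config shown above has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. For reference, a file like this could be sanity-checked before the real init using kubeadm's dry-run mode (illustrative only; the test itself goes straight to kubeadm init further below):

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
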
	I0919 23:14:12.581974  314456 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:14:12.586188  314456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:14:12.607875  314456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:14:12.716116  314456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:14:12.734308  314456 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447 for IP: 192.168.85.2
	I0919 23:14:12.734330  314456 certs.go:194] generating shared ca certs ...
	I0919 23:14:12.734349  314456 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:12.734528  314456 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 23:14:12.734596  314456 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 23:14:12.734608  314456 certs.go:256] generating profile certs ...
	I0919 23:14:12.734682  314456 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/client.key
	I0919 23:14:12.734697  314456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/client.crt with IP's: []
	I0919 23:14:11.958113  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:14:11.958169  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:14:11.958180  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:14:11.958189  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:14:11.958195  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:14:11.958203  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:14:11.958212  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:14:11.958218  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:14:11.958226  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:14:11.958246  294587 retry.go:31] will retry after 7.983039916s: missing components: kube-dns
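
The retry above is waiting on kube-dns: coredns-66bc5c9577-qj565 is still Pending, so the "missing components: kube-dns" check keeps polling. A hedged equivalent of that check from the host (assuming the kubeconfig context carries the profile name, as minikube normally sets it):

    kubectl --context default-k8s-diff-port-149888 -n kube-system get pods -l k8s-app=kube-dns -o wide
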
	I0919 23:14:12.691453  316407 ssh_runner.go:195] Run: systemctl --version
	I0919 23:14:12.696767  316407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:14:12.702814  316407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:14:12.725532  316407 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:14:12.725633  316407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:14:12.740053  316407 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
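
The find/sed pair above patches any existing *loopback.conf* so it carries an explicit name and a 1.0.0 cniVersion, and would move bridge/podman configs out of the way (none were present here). After the patch, a loopback config would look roughly like this (file name illustrative):

    cat /etc/cni/net.d/loopback.conf
    {
      "cniVersion": "1.0.0",
      "name": "loopback",
      "type": "loopback"
    }
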
	I0919 23:14:12.740094  316407 start.go:495] detecting cgroup driver to use...
	I0919 23:14:12.740142  316407 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:14:12.740209  316407 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 23:14:12.760300  316407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:14:12.777831  316407 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:14:12.777900  316407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:14:12.797565  316407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:14:12.811907  316407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:14:12.886144  316407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:14:12.970365  316407 docker.go:234] disabling docker service ...
	I0919 23:14:12.970436  316407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:14:12.986464  316407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:14:13.001386  316407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:14:13.093150  316407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:14:13.174857  316407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:14:13.188837  316407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:14:13.209840  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:14:13.222137  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:14:13.234133  316407 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:14:13.234215  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:14:13.246797  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:14:13.258358  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:14:13.270014  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:14:13.281450  316407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:14:13.293142  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:14:13.304862  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:14:13.316724  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:14:13.330869  316407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:14:13.341489  316407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:14:13.351421  316407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:14:13.422220  316407 ssh_runner.go:195] Run: sudo systemctl restart containerd
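
The string of sed commands above edits /etc/containerd/config.toml in place (systemd cgroups, pause image, runc v2 runtime, CNI conf dir, unprivileged ports) before containerd is restarted. A quick, illustrative way to confirm the rewrite took effect on the node:

    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    sudo systemctl is-active containerd
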
	I0919 23:14:13.547300  316407 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 23:14:13.547384  316407 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 23:14:13.552420  316407 start.go:563] Will wait 60s for crictl version
	I0919 23:14:13.552487  316407 ssh_runner.go:195] Run: which crictl
	I0919 23:14:13.556401  316407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:14:13.599948  316407 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 23:14:13.600013  316407 ssh_runner.go:195] Run: containerd --version
	I0919 23:14:13.628047  316407 ssh_runner.go:195] Run: containerd --version
	I0919 23:14:13.663378  316407 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 23:14:13.664991  316407 cli_runner.go:164] Run: docker network inspect newest-cni-312465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:14:13.686205  316407 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0919 23:14:13.690770  316407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:14:13.706243  316407 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0919 23:14:13.227771  314456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/client.crt ...
	I0919 23:14:13.227800  314456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/client.crt: {Name:mk0b93185f911e1ed22da3a7e83b7e4a3b4656c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:13.228000  314456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/client.key ...
	I0919 23:14:13.228016  314456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/client.key: {Name:mk5d8a10d021e65e0ea2306996d9fbc7526dffd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:13.228126  314456 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.key.32a1d63c
	I0919 23:14:13.228143  314456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.crt.32a1d63c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0919 23:14:13.333956  314456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.crt.32a1d63c ...
	I0919 23:14:13.333977  314456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.crt.32a1d63c: {Name:mk9a873519aee347cbf22b74bd2d38bc94810c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:13.334191  314456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.key.32a1d63c ...
	I0919 23:14:13.334218  314456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.key.32a1d63c: {Name:mk13d46ff878edeb75b4823e34041b570050680a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:13.334332  314456 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.crt.32a1d63c -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.crt
	I0919 23:14:13.334456  314456 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.key.32a1d63c -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.key
	I0919 23:14:13.334547  314456 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.key
	I0919 23:14:13.334570  314456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.crt with IP's: []
	I0919 23:14:13.453769  314456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.crt ...
	I0919 23:14:13.453805  314456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.crt: {Name:mk18c10847de2c71aad6fd8c8f7c1ebac841e89d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:13.453991  314456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.key ...
	I0919 23:14:13.454007  314456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.key: {Name:mk9b7f5e8c4bb8366e641a0e2b1f8e73849fb5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:13.454279  314456 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:14:13.454317  314456 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:14:13.454323  314456 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:14:13.454353  314456 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:14:13.454394  314456 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:14:13.454423  314456 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:14:13.454477  314456 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:14:13.455082  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:14:13.484535  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:14:13.517304  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:14:13.548219  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:14:13.577583  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0919 23:14:13.610100  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:14:13.641092  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:14:13.672332  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:14:13.702813  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:14:13.737270  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:14:13.767056  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:14:13.799273  314456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:14:13.821884  314456 ssh_runner.go:195] Run: openssl version
	I0919 23:14:13.829600  314456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:14:13.842064  314456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:14:13.846954  314456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:14:13.847023  314456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:14:13.854977  314456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:14:13.865906  314456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:14:13.877609  314456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:13.881873  314456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:13.881937  314456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:13.889326  314456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:14:13.901422  314456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:14:13.912006  314456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:14:13.916442  314456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:14:13.916530  314456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:14:13.923959  314456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
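
The ln -fs targets above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash naming: each symlink under /etc/ssl/certs is named after the hash of the certificate it points to, which is exactly what the preceding "openssl x509 -hash -noout" calls compute. For example, for the minikube CA (values taken from the log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
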
	I0919 23:14:13.935586  314456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:14:13.940241  314456 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:14:13.940319  314456 kubeadm.go:392] StartCluster: {Name:auto-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:auto-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:14:13.940396  314456 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:14:13.940469  314456 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:14:13.985350  314456 cri.go:89] found id: ""
	I0919 23:14:13.985427  314456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:14:13.995885  314456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:14:14.007969  314456 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:14:14.008036  314456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:14:14.019642  314456 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:14:14.019663  314456 kubeadm.go:157] found existing configuration files:
	
	I0919 23:14:14.019710  314456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:14:14.030726  314456 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:14:14.030782  314456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:14:14.041637  314456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:14:14.053695  314456 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:14:14.053764  314456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:14:14.064445  314456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:14:14.075173  314456 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:14:14.075236  314456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:14:14.087504  314456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:14:14.099013  314456 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:14:14.099077  314456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:14:14.112720  314456 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:14:14.169062  314456 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:14:14.169138  314456 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:14:14.189713  314456 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:14:14.189803  314456 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:14:14.189894  314456 kubeadm.go:310] OS: Linux
	I0919 23:14:14.189972  314456 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:14:14.190087  314456 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:14:14.190193  314456 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:14:14.190265  314456 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:14:14.190344  314456 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:14:14.190423  314456 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:14:14.190492  314456 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:14:14.190547  314456 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:14:14.271081  314456 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:14:14.271250  314456 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:14:14.271370  314456 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:14:14.277841  314456 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:14:13.707636  316407 kubeadm.go:875] updating cluster {Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:14:13.707821  316407 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:14:13.707910  316407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:14:13.746098  316407 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:14:13.746124  316407 containerd.go:534] Images already preloaded, skipping extraction
	I0919 23:14:13.746189  316407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:14:13.783844  316407 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:14:13.783878  316407 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:14:13.783892  316407 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 containerd true true} ...
	I0919 23:14:13.784029  316407 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-312465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:14:13.784105  316407 ssh_runner.go:195] Run: sudo crictl info
	I0919 23:14:13.825633  316407 cni.go:84] Creating CNI manager for ""
	I0919 23:14:13.825659  316407 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:14:13.825671  316407 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0919 23:14:13.825695  316407 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-312465 NodeName:newest-cni-312465 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:14:13.825851  316407 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-312465"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:14:13.825918  316407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:14:13.837794  316407 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:14:13.837887  316407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:14:13.849348  316407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0919 23:14:13.870115  316407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:14:13.890641  316407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
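(Illustrative sketch, not something the test runs.) The kubeadm config rendered above is copied to /var/tmp/minikube/kubeadm.yaml.new; assuming the kubeadm binary sits next to kubelet under /var/lib/minikube/binaries/v1.34.0, it could be exercised by hand with a dry run:
    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run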
	I0919 23:14:13.911598  316407 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:14:13.915811  316407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:14:13.929529  316407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:14:14.000787  316407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:14:14.028231  316407 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465 for IP: 192.168.94.2
	I0919 23:14:14.028254  316407 certs.go:194] generating shared ca certs ...
	I0919 23:14:14.028275  316407 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:14.028432  316407 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 23:14:14.028491  316407 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 23:14:14.028507  316407 certs.go:256] generating profile certs ...
	I0919 23:14:14.028614  316407 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.key
	I0919 23:14:14.028693  316407 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb
	I0919 23:14:14.028734  316407 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key
	I0919 23:14:14.028833  316407 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:14:14.028868  316407 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:14:14.028877  316407 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:14:14.028899  316407 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:14:14.028920  316407 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:14:14.028944  316407 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:14:14.028982  316407 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:14:14.029670  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:14:14.060556  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:14:14.091021  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:14:14.125694  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:14:14.162025  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 23:14:14.193400  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:14:14.225636  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:14:14.256444  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:14:14.289567  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:14:14.318029  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:14:14.346226  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:14:14.376307  316407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:14:14.396487  316407 ssh_runner.go:195] Run: openssl version
	I0919 23:14:14.402520  316407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:14:14.415243  316407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:14.419577  316407 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:14.419641  316407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:14.427202  316407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:14:14.437468  316407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:14:14.448249  316407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:14:14.452004  316407 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:14:14.452063  316407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:14:14.459481  316407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 23:14:14.470309  316407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:14:14.481537  316407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:14:14.485192  316407 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:14:14.485248  316407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:14:14.492363  316407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
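(Annotation, not part of the log.) The hash-named symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash lookup scheme; a small sketch of the same steps, assuming it is run on the node:
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # expects: OK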
	I0919 23:14:14.502027  316407 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:14:14.505956  316407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 23:14:14.512880  316407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 23:14:14.522652  316407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 23:14:14.530950  316407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 23:14:14.538030  316407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 23:14:14.545349  316407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
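(Annotation, not part of the log.) The -checkend 86400 runs above are plain expiry probes: exit status 0 means the certificate is still valid 24 hours from now. An illustrative one-liner for any of the files checked:
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "still valid 24h from now" || echo "expires within 24h"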
	I0919 23:14:14.552550  316407 kubeadm.go:392] StartCluster: {Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:14:14.552666  316407 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:14:14.552715  316407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:14:14.605168  316407 cri.go:89] found id: "3bb98db115b6ad13cceece8b521436100bb04d0ceb273c75d323e94ef7440804"
	I0919 23:14:14.605195  316407 cri.go:89] found id: "03fb64d8cae80bb3a6cbd4e75fb9b8bed32c133d882bac12b3e69b1d615553f9"
	I0919 23:14:14.605201  316407 cri.go:89] found id: "aa4f1d7ae4be8607dc91cdece6dc505e811e83bc72a4d7ac0cf5dbb0e3120d87"
	I0919 23:14:14.605206  316407 cri.go:89] found id: "901a24762656849ac73b160ebe4d6031cc41bae30508e7e9b204baf440837dc2"
	I0919 23:14:14.605210  316407 cri.go:89] found id: "02f3965879829c98ed424d224c8a4ecc467b95a2b385c7eb4440639f1bccf628"
	I0919 23:14:14.605214  316407 cri.go:89] found id: "529122f97b267c7d2c20849ccbcc739630ced21969d0da2315cc2bb32dc0c09e"
	I0919 23:14:14.605218  316407 cri.go:89] found id: "879c323689e20cb30fefa0341fc12a9b42debf5a0380f2c22c16c23aefb17b5e"
	I0919 23:14:14.605222  316407 cri.go:89] found id: ""
	I0919 23:14:14.605282  316407 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0919 23:14:14.626731  316407 cri.go:116] JSON = null
	W0919 23:14:14.626776  316407 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 7
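(Annotation, not part of the log.) The "list returned 0 containers, but ps returned 7" warning comes from comparing the two listings quoted in the Run: lines above; reproduced by hand they look like this:
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l   # 7 kube-system containers here
    sudo runc --root /run/containerd/runc/k8s.io list -f json                           # "null" here, i.e. runc reports no containers in this root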
	I0919 23:14:14.626824  316407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:14:14.643440  316407 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 23:14:14.643461  316407 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 23:14:14.643521  316407 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 23:14:14.659895  316407 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 23:14:14.660666  316407 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-312465" does not appear in /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:14:14.661078  316407 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14678/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-312465" cluster setting kubeconfig missing "newest-cni-312465" context setting]
	I0919 23:14:14.661838  316407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:14.663926  316407 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 23:14:14.680534  316407 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0919 23:14:14.680570  316407 kubeadm.go:593] duration metric: took 37.103648ms to restartPrimaryControlPlane
	I0919 23:14:14.680582  316407 kubeadm.go:394] duration metric: took 128.045835ms to StartCluster
	I0919 23:14:14.680601  316407 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:14.680657  316407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:14:14.681787  316407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:14.682098  316407 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:14:14.682443  316407 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:14:14.682542  316407 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-312465"
	I0919 23:14:14.682566  316407 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-312465"
	W0919 23:14:14.682578  316407 addons.go:247] addon storage-provisioner should already be in state true
	I0919 23:14:14.682605  316407 host.go:66] Checking if "newest-cni-312465" exists ...
	I0919 23:14:14.682653  316407 addons.go:69] Setting default-storageclass=true in profile "newest-cni-312465"
	I0919 23:14:14.682676  316407 addons.go:69] Setting dashboard=true in profile "newest-cni-312465"
	I0919 23:14:14.682692  316407 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-312465"
	I0919 23:14:14.682696  316407 addons.go:238] Setting addon dashboard=true in "newest-cni-312465"
	I0919 23:14:14.682693  316407 addons.go:69] Setting metrics-server=true in profile "newest-cni-312465"
	W0919 23:14:14.682706  316407 addons.go:247] addon dashboard should already be in state true
	I0919 23:14:14.682722  316407 addons.go:238] Setting addon metrics-server=true in "newest-cni-312465"
	W0919 23:14:14.682733  316407 addons.go:247] addon metrics-server should already be in state true
	I0919 23:14:14.682738  316407 host.go:66] Checking if "newest-cni-312465" exists ...
	I0919 23:14:14.682766  316407 host.go:66] Checking if "newest-cni-312465" exists ...
	I0919 23:14:14.683072  316407 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:14:14.683131  316407 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:14:14.683171  316407 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:14:14.683300  316407 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:14:14.683717  316407 config.go:182] Loaded profile config "newest-cni-312465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:14.689018  316407 out.go:179] * Verifying Kubernetes components...
	I0919 23:14:14.690673  316407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:14:14.727141  316407 addons.go:238] Setting addon default-storageclass=true in "newest-cni-312465"
	W0919 23:14:14.727330  316407 addons.go:247] addon default-storageclass should already be in state true
	I0919 23:14:14.727408  316407 host.go:66] Checking if "newest-cni-312465" exists ...
	I0919 23:14:14.728034  316407 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:14:14.729775  316407 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:14:14.732485  316407 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 23:14:14.732514  316407 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:14:14.732751  316407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:14:14.732604  316407 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0919 23:14:14.733040  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:14.734716  316407 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 23:14:14.734882  316407 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 23:14:14.735006  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:14.736607  316407 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0919 23:14:12.833967  316421 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/kindnet-896447/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:14:12.867976  316421 cli_runner.go:164] Run: docker container inspect kindnet-896447 --format={{.State.Status}}
	I0919 23:14:12.888929  316421 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:14:12.888954  316421 kic_runner.go:114] Args: [docker exec --privileged kindnet-896447 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:14:12.943921  316421 cli_runner.go:164] Run: docker container inspect kindnet-896447 --format={{.State.Status}}
	I0919 23:14:12.967055  316421 machine.go:93] provisionDockerMachine start ...
	I0919 23:14:12.967150  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:12.987883  316421 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:12.988244  316421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I0919 23:14:12.988261  316421 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:14:13.131477  316421 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-896447
	
	I0919 23:14:13.131514  316421 ubuntu.go:182] provisioning hostname "kindnet-896447"
	I0919 23:14:13.131585  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:13.155953  316421 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:13.156192  316421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I0919 23:14:13.156208  316421 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-896447 && echo "kindnet-896447" | sudo tee /etc/hostname
	I0919 23:14:13.311553  316421 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-896447
	
	I0919 23:14:13.311659  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:13.333633  316421 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:13.333942  316421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I0919 23:14:13.333971  316421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-896447' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-896447/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-896447' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:14:13.477191  316421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:14:13.477226  316421 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 23:14:13.477247  316421 ubuntu.go:190] setting up certificates
	I0919 23:14:13.477259  316421 provision.go:84] configureAuth start
	I0919 23:14:13.477315  316421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-896447
	I0919 23:14:13.499643  316421 provision.go:143] copyHostCerts
	I0919 23:14:13.499712  316421 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 23:14:13.499733  316421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 23:14:13.499803  316421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 23:14:13.499926  316421 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 23:14:13.499939  316421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 23:14:13.499986  316421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 23:14:13.500082  316421 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 23:14:13.500093  316421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 23:14:13.500136  316421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 23:14:13.500229  316421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.kindnet-896447 san=[127.0.0.1 192.168.76.2 kindnet-896447 localhost minikube]
	I0919 23:14:13.774403  316421 provision.go:177] copyRemoteCerts
	I0919 23:14:13.774464  316421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:14:13.774504  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:13.797738  316421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/kindnet-896447/id_rsa Username:docker}
	I0919 23:14:13.898422  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0919 23:14:13.928934  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:14:13.960742  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 23:14:13.990831  316421 provision.go:87] duration metric: took 513.561321ms to configureAuth
	I0919 23:14:13.990856  316421 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:14:13.991020  316421 config.go:182] Loaded profile config "kindnet-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:13.991031  316421 machine.go:96] duration metric: took 1.023951266s to provisionDockerMachine
	I0919 23:14:13.991038  316421 client.go:171] duration metric: took 6.179978715s to LocalClient.Create
	I0919 23:14:13.991059  316421 start.go:167] duration metric: took 6.180048472s to libmachine.API.Create "kindnet-896447"
	I0919 23:14:13.991071  316421 start.go:293] postStartSetup for "kindnet-896447" (driver="docker")
	I0919 23:14:13.991082  316421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:14:13.991139  316421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:14:13.991208  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:14.014007  316421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/kindnet-896447/id_rsa Username:docker}
	I0919 23:14:14.125753  316421 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:14:14.131110  316421 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:14:14.131151  316421 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:14:14.131191  316421 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:14:14.131199  316421 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:14:14.131212  316421 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 23:14:14.131282  316421 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 23:14:14.131379  316421 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 23:14:14.131491  316421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:14:14.144534  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:14:14.179321  316421 start.go:296] duration metric: took 188.234585ms for postStartSetup
	I0919 23:14:14.179794  316421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-896447
	I0919 23:14:14.203994  316421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/config.json ...
	I0919 23:14:14.204542  316421 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:14:14.204751  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:14.226585  316421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/kindnet-896447/id_rsa Username:docker}
	I0919 23:14:14.327556  316421 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:14:14.332879  316421 start.go:128] duration metric: took 6.527836552s to createHost
	I0919 23:14:14.332905  316421 start.go:83] releasing machines lock for "kindnet-896447", held for 6.528006955s
	I0919 23:14:14.332977  316421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-896447
	I0919 23:14:14.353625  316421 ssh_runner.go:195] Run: cat /version.json
	I0919 23:14:14.353683  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:14.353762  316421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:14:14.353842  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:14.376549  316421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/kindnet-896447/id_rsa Username:docker}
	I0919 23:14:14.376871  316421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/kindnet-896447/id_rsa Username:docker}
	I0919 23:14:14.472738  316421 ssh_runner.go:195] Run: systemctl --version
	I0919 23:14:14.558540  316421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:14:14.564342  316421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:14:14.611592  316421 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:14:14.611694  316421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:14:14.660148  316421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
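(Illustrative sketch; the exact loopback file name is not shown in the log, so the globs are an assumption.) After the find/sed patch above, the loopback CNI config carries a "name" field and cniVersion 1.0.0, and the bridge/podman configs are parked with a .mk_disabled suffix:
    grep -E '"name"|"cniVersion"|"type"' /etc/cni/net.d/*loopback.conf* 2>/dev/null
    ls /etc/cni/net.d/*.mk_disabled 2>/dev/null     # e.g. 87-podman-bridge.conflist and 100-crio-bridge.conf moved aside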
	I0919 23:14:14.660209  316421 start.go:495] detecting cgroup driver to use...
	I0919 23:14:14.660246  316421 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:14:14.660303  316421 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 23:14:14.683996  316421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:14:14.702943  316421 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:14:14.703000  316421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:14:14.738567  316421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:14:14.783709  316421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:14:14.907036  316421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:14:15.043923  316421 docker.go:234] disabling docker service ...
	I0919 23:14:15.044142  316421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:14:15.084474  316421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:14:15.109803  316421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:14:15.224009  316421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:14:15.323364  316421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:14:15.344143  316421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:14:15.369854  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:14:15.386980  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:14:15.400363  316421 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:14:15.400507  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:14:15.416777  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:14:15.429203  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:14:15.441951  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:14:15.454030  316421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:14:15.466471  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:14:15.481055  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:14:15.496817  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:14:15.511775  316421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:14:15.525364  316421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:14:15.538710  316421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:14:15.646137  316421 ssh_runner.go:195] Run: sudo systemctl restart containerd
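(Annotation, not part of the log.) The sed edits above converge on a handful of settings in /etc/containerd/config.toml plus the crictl endpoint file; a quick way to confirm them after the restart:
    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    # expected per the sed commands above:
    #   SystemdCgroup = true
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true
    cat /etc/crictl.yaml                 # runtime-endpoint: unix:///run/containerd/containerd.sock
    sudo systemctl is-active containerd  # "active" once the restart has settled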
	I0919 23:14:15.781720  316421 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 23:14:15.781806  316421 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 23:14:15.787063  316421 start.go:563] Will wait 60s for crictl version
	I0919 23:14:15.787125  316421 ssh_runner.go:195] Run: which crictl
	I0919 23:14:15.792265  316421 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:14:15.853186  316421 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 23:14:15.853341  316421 ssh_runner.go:195] Run: containerd --version
	I0919 23:14:15.887640  316421 ssh_runner.go:195] Run: containerd --version
	I0919 23:14:15.920891  316421 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 23:14:14.738308  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0919 23:14:14.738329  316407 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0919 23:14:14.738394  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:14.764755  316407 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:14:14.764780  316407 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:14:14.764841  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:14.771303  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:14.777340  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:14.791575  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:14.805000  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:14.882242  316407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:14:14.910606  316407 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:14:14.910697  316407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:14:14.930741  316407 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 23:14:14.930767  316407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 23:14:14.935967  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0919 23:14:14.935993  316407 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0919 23:14:14.939640  316407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:14:14.957552  316407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:14:14.994458  316407 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 23:14:14.994489  316407 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 23:14:14.997621  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0919 23:14:14.997742  316407 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0919 23:14:15.034556  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0919 23:14:15.034585  316407 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0919 23:14:15.066255  316407 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:14:15.066302  316407 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 23:14:15.109246  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0919 23:14:15.109273  316407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0919 23:14:15.125277  316407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:14:15.134149  316407 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:14:15.134213  316407 retry.go:31] will retry after 132.780092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:14:15.134285  316407 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:14:15.134300  316407 retry.go:31] will retry after 191.23981ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
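(Annotation, not part of the log.) The connection-refused retries above only mean the apiserver on localhost:8443 was not yet accepting connections while the addon manifests were being applied; an illustrative probe one could run on the node in the meantime:
    curl -ksS --max-time 2 https://localhost:8443/healthz ; echo   # any HTTP body (even 403) means the socket is open; "connection refused" means not yet
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'                   # the same liveness check minikube itself runs above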
	I0919 23:14:15.150244  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0919 23:14:15.150276  316407 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0919 23:14:15.188071  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0919 23:14:15.188099  316407 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0919 23:14:15.226118  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0919 23:14:15.226142  316407 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0919 23:14:15.247203  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0919 23:14:15.247229  316407 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0919 23:14:15.268080  316407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:14:15.280407  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:14:15.280444  316407 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0919 23:14:15.309418  316407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:14:15.326108  316407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:14:15.411042  316407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:14:15.922475  316421 cli_runner.go:164] Run: docker network inspect kindnet-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:14:15.947468  316421 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0919 23:14:15.952833  316421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:14:15.969789  316421 kubeadm.go:875] updating cluster {Name:kindnet-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:14:15.969918  316421 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:14:15.970002  316421 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:14:16.019109  316421 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:14:16.019137  316421 containerd.go:534] Images already preloaded, skipping extraction
	I0919 23:14:16.019207  316421 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:14:16.064817  316421 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:14:16.064846  316421 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:14:16.064858  316421 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 containerd true true} ...
	I0919 23:14:16.064959  316421 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-896447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:kindnet-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0919 23:14:16.065020  316421 ssh_runner.go:195] Run: sudo crictl info
	I0919 23:14:16.110692  316421 cni.go:84] Creating CNI manager for "kindnet"
	I0919 23:14:16.110722  316421 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:14:16.110751  316421 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-896447 NodeName:kindnet-896447 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:14:16.110896  316421 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kindnet-896447"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:14:16.110970  316421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:14:16.123909  316421 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:14:16.123982  316421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:14:16.135873  316421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0919 23:14:16.164147  316421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:14:16.198671  316421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
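For context, the kubeadm.yaml generated above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml.new on the node and later consumed by kubeadm init --config. A minimal, hypothetical Go sketch of a sanity check that each document parses and carries an apiVersion/kind; this is illustrative only and not minikube's actual code path (gopkg.in/yaml.v3 is an assumed dependency for the sketch):

package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency, used only for this sketch
)

func main() {
	// Path taken from the log line above; adjust for a local check.
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	// Decode each "---"-separated document in the stream independently.
	dec := yaml.NewDecoder(bytes.NewReader(raw))
	for i := 0; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break
			}
			log.Fatalf("document %d failed to parse: %v", i, err)
		}
		fmt.Printf("doc %d: apiVersion=%v kind=%v\n", i, doc["apiVersion"], doc["kind"])
	}
}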
	I0919 23:14:16.226892  316421 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:14:16.232105  316421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:14:16.247558  316421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:14:16.346452  316421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:14:16.372108  316421 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447 for IP: 192.168.76.2
	I0919 23:14:16.372145  316421 certs.go:194] generating shared ca certs ...
	I0919 23:14:16.372197  316421 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:16.372376  316421 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 23:14:16.372433  316421 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 23:14:16.372443  316421 certs.go:256] generating profile certs ...
	I0919 23:14:16.372521  316421 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/client.key
	I0919 23:14:16.372536  316421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/client.crt with IP's: []
	I0919 23:14:16.995330  316421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/client.crt ...
	I0919 23:14:16.995368  316421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/client.crt: {Name:mk756bd659ab6e6d285c45d5259ede088998c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:16.995567  316421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/client.key ...
	I0919 23:14:16.995584  316421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/client.key: {Name:mk22d198c19d0d72b60c3316938f47df91a6db20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:16.995678  316421 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.key.eefef399
	I0919 23:14:16.995701  316421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.crt.eefef399 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0919 23:14:14.279860  314456 out.go:252]   - Generating certificates and keys ...
	I0919 23:14:14.279973  314456 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:14:14.280059  314456 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:14:14.339607  314456 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:14:14.695648  314456 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:14:15.166427  314456 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:14:15.478284  314456 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:14:16.056979  314456 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:14:16.057145  314456 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-896447 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0919 23:14:16.613509  314456 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:14:16.613709  314456 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-896447 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0919 23:14:16.802042  314456 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:14:17.219182  314456 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:14:17.329623  314456 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:14:17.330259  314456 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:14:17.603770  314456 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:14:17.944879  316407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.819556771s)
	I0919 23:14:17.944912  316407 addons.go:479] Verifying addon metrics-server=true in "newest-cni-312465"
	I0919 23:14:17.944967  316407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.67685062s)
	I0919 23:14:17.945121  316407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.635654715s)
	I0919 23:14:17.945218  316407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.619066208s)
	I0919 23:14:17.945263  316407 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.534186003s)
	I0919 23:14:17.945453  316407 api_server.go:72] duration metric: took 3.263316745s to wait for apiserver process to appear ...
	I0919 23:14:17.945465  316407 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:14:17.945486  316407 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:14:17.950311  316407 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-312465 addons enable metrics-server
	
	I0919 23:14:17.950677  316407 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:14:17.950701  316407 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:14:17.960944  316407 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0919 23:14:18.002769  314456 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:14:18.383355  314456 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:14:18.670472  314456 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:14:18.842477  314456 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:14:18.843208  314456 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:14:18.848514  314456 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:14:17.962534  316407 addons.go:514] duration metric: took 3.280097551s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0919 23:14:18.446348  316407 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:14:18.450811  316407 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:14:18.450836  316407 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:14:18.946338  316407 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:14:18.950671  316407 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:14:18.950695  316407 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:14:19.446401  316407 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:14:19.451960  316407 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0919 23:14:19.453481  316407 api_server.go:141] control plane version: v1.34.0
	I0919 23:14:19.453511  316407 api_server.go:131] duration metric: took 1.508037102s to wait for apiserver health ...
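The api_server.go lines above poll https://192.168.94.2:8443/healthz on a roughly 500ms cadence; the endpoint keeps returning 500 until the rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes and apiservice-discovery-controller post-start hooks report ok, after which it returns 200 and the wait completes (about 1.5s in this run). A rough, hypothetical sketch of such a wait loop in Go, using plain TLS-insecure GETs purely for illustration (the real client authenticates with the cluster CA and client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Illustrative only: skip TLS verification; a real client should verify against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}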
	I0919 23:14:19.453520  316407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:14:19.458758  316407 system_pods.go:59] 9 kube-system pods found
	I0919 23:14:19.458795  316407 system_pods.go:61] "coredns-66bc5c9577-xsnhs" [7a077a85-1f7c-4378-848b-a221d6e520ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:14:19.458803  316407 system_pods.go:61] "etcd-newest-cni-312465" [08794421-938c-46b5-bdf7-c0231507b4c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:14:19.458809  316407 system_pods.go:61] "kindnet-k9944" [ee352ec9-4e85-4bd1-9933-d4bf06151211] Running
	I0919 23:14:19.458816  316407 system_pods.go:61] "kube-apiserver-newest-cni-312465" [d2270076-85fd-4d05-8cc7-540ba3e8e250] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:14:19.458824  316407 system_pods.go:61] "kube-controller-manager-newest-cni-312465" [6e0dfe68-d986-4517-b42d-b3c9399cb136] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:14:19.458848  316407 system_pods.go:61] "kube-proxy-xmkv2" [9950d4ad-cc22-4962-88ac-47beba90840d] Running
	I0919 23:14:19.458856  316407 system_pods.go:61] "kube-scheduler-newest-cni-312465" [1b509b47-9469-4600-82ee-6e262fd24fef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:14:19.458872  316407 system_pods.go:61] "metrics-server-746fcd58dc-sbqxp" [924329ef-c721-4984-b923-8e92b0a66cd5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:14:19.458877  316407 system_pods.go:61] "storage-provisioner" [eff56a05-7e3a-4af0-9c37-7e4b4a5b6334] Running
	I0919 23:14:19.458890  316407 system_pods.go:74] duration metric: took 5.365089ms to wait for pod list to return data ...
	I0919 23:14:19.458900  316407 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:14:19.461999  316407 default_sa.go:45] found service account: "default"
	I0919 23:14:19.462030  316407 default_sa.go:55] duration metric: took 3.124934ms for default service account to be created ...
	I0919 23:14:19.462043  316407 kubeadm.go:578] duration metric: took 4.77991527s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0919 23:14:19.462066  316407 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:14:19.466335  316407 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 23:14:19.466385  316407 node_conditions.go:123] node cpu capacity is 8
	I0919 23:14:19.466402  316407 node_conditions.go:105] duration metric: took 4.330691ms to run NodePressure ...
	I0919 23:14:19.466416  316407 start.go:241] waiting for startup goroutines ...
	I0919 23:14:19.466433  316407 start.go:246] waiting for cluster config update ...
	I0919 23:14:19.466446  316407 start.go:255] writing updated cluster config ...
	I0919 23:14:19.466785  316407 ssh_runner.go:195] Run: rm -f paused
	I0919 23:14:19.520095  316407 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:14:19.522821  316407 out.go:179] * Done! kubectl is now configured to use "newest-cni-312465" cluster and "default" namespace by default
	I0919 23:14:17.916447  316421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.crt.eefef399 ...
	I0919 23:14:17.916502  316421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.crt.eefef399: {Name:mk72e58bfc8302aa7f218e1e79f36883826a855f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:17.916694  316421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.key.eefef399 ...
	I0919 23:14:17.916716  316421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.key.eefef399: {Name:mk3be3aa1c5fa005b492ed8c1826542cf3a64813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:17.916836  316421 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.crt.eefef399 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.crt
	I0919 23:14:17.916971  316421 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.key.eefef399 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.key
	I0919 23:14:17.917066  316421 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.key
	I0919 23:14:17.917093  316421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.crt with IP's: []
	I0919 23:14:18.315401  316421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.crt ...
	I0919 23:14:18.315433  316421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.crt: {Name:mka98fb2ee9fd556c68bcdb49fcf9c592f6611d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:18.315624  316421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.key ...
	I0919 23:14:18.315646  316421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.key: {Name:mkaecc9232a2f403f0979c61521a806d03de4a51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:18.315886  316421 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:14:18.315932  316421 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:14:18.315948  316421 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:14:18.315978  316421 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:14:18.316015  316421 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:14:18.316050  316421 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:14:18.316102  316421 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:14:18.316917  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:14:18.346101  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:14:18.375053  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:14:18.403450  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:14:18.431536  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 23:14:18.461372  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 23:14:18.491092  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:14:18.521880  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:14:18.549370  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:14:18.584079  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:14:18.614787  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:14:18.642628  316421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:14:18.663602  316421 ssh_runner.go:195] Run: openssl version
	I0919 23:14:18.669498  316421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:14:18.680793  316421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:14:18.685044  316421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:14:18.685124  316421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:14:18.693071  316421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:14:18.704026  316421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:14:18.714870  316421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:18.718981  316421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:18.719040  316421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:18.726457  316421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:14:18.737578  316421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:14:18.749196  316421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:14:18.753603  316421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:14:18.753693  316421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:14:18.763035  316421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
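The sequence above installs each CA bundle under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (for example minikubeCA.pem -> b5213941.0, 18210.pem -> 51391683.0), which is how OpenSSL-based clients locate trusted CAs. A small, hypothetical Go sketch that reproduces the "openssl x509 -hash" plus "ln -fs" steps shown in the log; the paths are the ones from this run and the helper name is invented for illustration:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert mirrors the commands in the log: compute the subject hash with openssl,
// then (re)create /etc/ssl/certs/<hash>.0 pointing at the PEM file.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -fs: drop an existing link before recreating it
	if err := os.Symlink(pem, link); err != nil {
		return err
	}
	fmt.Println(link, "->", pem)
	return nil
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}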
	I0919 23:14:18.774430  316421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:14:18.778601  316421 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:14:18.778671  316421 kubeadm.go:392] StartCluster: {Name:kindnet-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:14:18.778773  316421 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:14:18.778834  316421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:14:18.819443  316421 cri.go:89] found id: ""
	I0919 23:14:18.819505  316421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:14:18.829703  316421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:14:18.841921  316421 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:14:18.841987  316421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:14:18.853980  316421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:14:18.854000  316421 kubeadm.go:157] found existing configuration files:
	
	I0919 23:14:18.854058  316421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:14:18.866028  316421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:14:18.866093  316421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:14:18.876375  316421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:14:18.886428  316421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:14:18.886493  316421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:14:18.897483  316421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:14:18.909787  316421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:14:18.909855  316421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:14:18.919563  316421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:14:18.929692  316421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:14:18.929751  316421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
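Before running kubeadm init, the lines above check each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf for the expected https://control-plane.minikube.internal:8443 endpoint and delete any file that does not contain it (here all four are simply absent, so this is effectively a fresh init). A hedged Go sketch of that cleanup pattern, illustrative only and not the real kubeadm.go logic:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStaleKubeconfigs removes kubeconfig files that do not reference the expected
// endpoint, mirroring the grep-then-rm sequence in the log above.
func cleanupStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f) // missing or stale: remove so kubeadm init can regenerate it
			fmt.Printf("removed (or absent): %s\n", f)
			continue
		}
		fmt.Printf("kept: %s\n", f)
	}
}

func main() {
	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}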
	I0919 23:14:18.939921  316421 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:14:19.006215  316421 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:14:19.065580  316421 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:14:18.853869  314456 out.go:252]   - Booting up control plane ...
	I0919 23:14:18.854010  314456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:14:18.854104  314456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:14:18.854221  314456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:14:18.868490  314456 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:14:18.868676  314456 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:14:18.875634  314456 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:14:18.876039  314456 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:14:18.876105  314456 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:14:18.956643  314456 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:14:18.956872  314456 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:14:19.459695  314456 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.757938ms
	I0919 23:14:19.463674  314456 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:14:19.463889  314456 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0919 23:14:19.464288  314456 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:14:19.464400  314456 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:14:21.546363  314456 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.083006654s
	I0919 23:14:22.533017  314456 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.069813223s
	I0919 23:14:19.946192  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:14:19.946232  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:14:19.946241  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:14:19.946254  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:14:19.946260  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:14:19.946265  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:14:19.946272  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:14:19.946278  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:14:19.946283  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:14:19.946301  294587 retry.go:31] will retry after 13.760304207s: missing components: kube-dns
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ff15da67a3dd2       409467f978b4a       6 seconds ago       Running             kindnet-cni               1                   aae1d5a52934e       kindnet-k9944
	84fd9d8b5e329       6e38f40d628db       6 seconds ago       Running             storage-provisioner       1                   7f471c7ae7612       storage-provisioner
	876a049bd2c09       df0860106674d       7 seconds ago       Running             kube-proxy                1                   5b4da5b4ca0dd       kube-proxy-xmkv2
	5d374a5f9fa61       46169d968e920       9 seconds ago       Running             kube-scheduler            1                   41f093272882b       kube-scheduler-newest-cni-312465
	330d509e7f38b       a0af72f2ec6d6       9 seconds ago       Running             kube-controller-manager   1                   88becf992f287       kube-controller-manager-newest-cni-312465
	40de363bc7b2f       90550c43ad2bc       9 seconds ago       Running             kube-apiserver            1                   f3cd2f7aac9a4       kube-apiserver-newest-cni-312465
	d166726518342       5f1f5298c888d       9 seconds ago       Running             etcd                      1                   e9df11abe0829       etcd-newest-cni-312465
	3bb98db115b6a       6e38f40d628db       21 seconds ago      Exited              storage-provisioner       0                   38b277a881dd9       storage-provisioner
	03fb64d8cae80       409467f978b4a       21 seconds ago      Exited              kindnet-cni               0                   3409b95e609ad       kindnet-k9944
	aa4f1d7ae4be8       df0860106674d       21 seconds ago      Exited              kube-proxy                0                   e562e80fea19d       kube-proxy-xmkv2
	901a247626568       5f1f5298c888d       33 seconds ago      Exited              etcd                      0                   edb938e8e5c29       etcd-newest-cni-312465
	02f3965879829       46169d968e920       33 seconds ago      Exited              kube-scheduler            0                   ff450beb2bbf3       kube-scheduler-newest-cni-312465
	529122f97b267       a0af72f2ec6d6       33 seconds ago      Exited              kube-controller-manager   0                   4d43b2e1857a3       kube-controller-manager-newest-cni-312465
	879c323689e20       90550c43ad2bc       33 seconds ago      Exited              kube-apiserver            0                   4ad178b48b93e       kube-apiserver-newest-cni-312465
	
	
	==> containerd <==
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.488977525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-k9944,Uid:ee352ec9-4e85-4bd1-9933-d4bf06151211,Namespace:kube-system,Attempt:1,}"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.497182817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xsnhs,Uid:7a077a85-1f7c-4378-848b-a221d6e520ff,Namespace:kube-system,Attempt:0,}"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.503624877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-746fcd58dc-sbqxp,Uid:924329ef-c721-4984-b923-8e92b0a66cd5,Namespace:kube-system,Attempt:0,}"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.507297227Z" level=info msg="StopPodSandbox for \"38b277a881dd91bca1d021ab6c910c3367f41b17aa5c7ea43f70265bf4c6f012\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.507359510Z" level=info msg="Container to stop \"3bb98db115b6ad13cceece8b521436100bb04d0ceb273c75d323e94ef7440804\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.507471325Z" level=info msg="TearDown network for sandbox \"38b277a881dd91bca1d021ab6c910c3367f41b17aa5c7ea43f70265bf4c6f012\" successfully"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.507485794Z" level=info msg="StopPodSandbox for \"38b277a881dd91bca1d021ab6c910c3367f41b17aa5c7ea43f70265bf4c6f012\" returns successfully"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.516132935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:eff56a05-7e3a-4af0-9c37-7e4b4a5b6334,Namespace:kube-system,Attempt:1,}"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.595659163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-746fcd58dc-sbqxp,Uid:924329ef-c721-4984-b923-8e92b0a66cd5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a0340f86a5d931cfd7b3b8e068c131613cdb373e49caa5aa71fdf57df7cc627\": failed to find network info for sandbox \"2a0340f86a5d931cfd7b3b8e068c131613cdb373e49caa5aa71fdf57df7cc627\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.596895856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xsnhs,Uid:7a077a85-1f7c-4378-848b-a221d6e520ff,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5935d5f1bcd9300bc8353efd02954304aafc6be6d0901cc51c2733b6f256201\": failed to find network info for sandbox \"a5935d5f1bcd9300bc8353efd02954304aafc6be6d0901cc51c2733b6f256201\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.616703717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xmkv2,Uid:9950d4ad-cc22-4962-88ac-47beba90840d,Namespace:kube-system,Attempt:1,} returns sandbox id \"5b4da5b4ca0dd4fda445fe75a7f52a7276b68d234902ff8cfa5275ffb63ce4a7\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.623591434Z" level=info msg="CreateContainer within sandbox \"5b4da5b4ca0dd4fda445fe75a7f52a7276b68d234902ff8cfa5275ffb63ce4a7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.638133937Z" level=info msg="CreateContainer within sandbox \"5b4da5b4ca0dd4fda445fe75a7f52a7276b68d234902ff8cfa5275ffb63ce4a7\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"876a049bd2c0944bdc54c0e0c28dacdc47d4c43e6f9debfb640fb0ae0ebc09e3\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.639304810Z" level=info msg="StartContainer for \"876a049bd2c0944bdc54c0e0c28dacdc47d4c43e6f9debfb640fb0ae0ebc09e3\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.708909652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:eff56a05-7e3a-4af0-9c37-7e4b4a5b6334,Namespace:kube-system,Attempt:1,} returns sandbox id \"7f471c7ae7612cfa00ea52b54ff36497fa8cc69aeceaca2f058b6194be5bd478\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.719615743Z" level=info msg="CreateContainer within sandbox \"7f471c7ae7612cfa00ea52b54ff36497fa8cc69aeceaca2f058b6194be5bd478\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.741673566Z" level=info msg="CreateContainer within sandbox \"7f471c7ae7612cfa00ea52b54ff36497fa8cc69aeceaca2f058b6194be5bd478\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"84fd9d8b5e329e51d64ecd20963e0a6601ce8ea362348c6113030b16cc394f0a\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.743876737Z" level=info msg="StartContainer for \"84fd9d8b5e329e51d64ecd20963e0a6601ce8ea362348c6113030b16cc394f0a\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.769769971Z" level=info msg="StartContainer for \"876a049bd2c0944bdc54c0e0c28dacdc47d4c43e6f9debfb640fb0ae0ebc09e3\" returns successfully"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.848931830Z" level=info msg="StartContainer for \"84fd9d8b5e329e51d64ecd20963e0a6601ce8ea362348c6113030b16cc394f0a\" returns successfully"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.887714735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-k9944,Uid:ee352ec9-4e85-4bd1-9933-d4bf06151211,Namespace:kube-system,Attempt:1,} returns sandbox id \"aae1d5a52934e3769ade3546999d71c8add1fa66d3f74096a3992aef396e6c6e\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.893645141Z" level=info msg="CreateContainer within sandbox \"aae1d5a52934e3769ade3546999d71c8add1fa66d3f74096a3992aef396e6c6e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.913489562Z" level=info msg="CreateContainer within sandbox \"aae1d5a52934e3769ade3546999d71c8add1fa66d3f74096a3992aef396e6c6e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"ff15da67a3dd2104eb9d299adf7cce30bf5157e1ed3c5ea757353118130340cc\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.914284930Z" level=info msg="StartContainer for \"ff15da67a3dd2104eb9d299adf7cce30bf5157e1ed3c5ea757353118130340cc\""
	Sep 19 23:14:18 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:18.085237016Z" level=info msg="StartContainer for \"ff15da67a3dd2104eb9d299adf7cce30bf5157e1ed3c5ea757353118130340cc\" returns successfully"
	
	
	==> describe nodes <==
	Name:               newest-cni-312465
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-312465
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=newest-cni-312465
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_13_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:13:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-312465
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:14:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:14:17 +0000   Fri, 19 Sep 2025 23:13:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:14:17 +0000   Fri, 19 Sep 2025 23:13:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:14:17 +0000   Fri, 19 Sep 2025 23:13:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:14:17 +0000   Fri, 19 Sep 2025 23:13:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-312465
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 c74d3be6add3408da233db9049d6523b
	  System UUID:                0f85fc34-fc6d-40d4-accc-1a556e194ee2
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-xsnhs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     22s
	  kube-system                 etcd-newest-cni-312465                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-k9944                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-newest-cni-312465              250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-newest-cni-312465     200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-xmkv2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-newest-cni-312465              100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 metrics-server-746fcd58dc-sbqxp               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         20s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wpkq4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-c6hgc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21s   kube-proxy       
	  Normal  Starting                 6s    kube-proxy       
	  Normal  Starting                 28s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28s   kubelet          Node newest-cni-312465 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s   kubelet          Node newest-cni-312465 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s   kubelet          Node newest-cni-312465 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23s   node-controller  Node newest-cni-312465 event: Registered Node newest-cni-312465 in Controller
	  Normal  RegisteredNode           4s    node-controller  Node newest-cni-312465 event: Registered Node newest-cni-312465 in Controller
	  Normal  Starting                 2s    kubelet          Starting kubelet.
	  Normal  Starting                 1s    kubelet          Starting kubelet.
	  Normal  Starting                 0s    kubelet          Starting kubelet.
	
	
	==> dmesg <==
	[  +0.995983] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.504855] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.994952] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.505945] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501588] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.993278] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.507907] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500995] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.992251] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.509112] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501500] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989799] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.510643] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501653] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989383] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.511830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500482] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989056] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.512088] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500241] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501649] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501291] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501422] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[Sep19 23:13] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	
	
	==> etcd [901a24762656849ac73b160ebe4d6031cc41bae30508e7e9b204baf440837dc2] <==
	{"level":"warn","ts":"2025-09-19T23:13:52.611673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.620654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.628338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.637058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.645037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.660031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.667997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.675896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.684637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.692625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.700709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.709416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.719558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.728829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.736466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.745589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.770298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.789669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.796406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.806579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.817747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.898374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42060","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T23:14:04.761663Z","caller":"traceutil/trace.go:172","msg":"trace[1275008077] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"143.202837ms","start":"2025-09-19T23:14:04.618439Z","end":"2025-09-19T23:14:04.761642Z","steps":["trace[1275008077] 'process raft request'  (duration: 63.366739ms)","trace[1275008077] 'compare'  (duration: 79.73343ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:14:04.896581Z","caller":"traceutil/trace.go:172","msg":"trace[1546239987] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"130.473535ms","start":"2025-09-19T23:14:04.766079Z","end":"2025-09-19T23:14:04.896552Z","steps":["trace[1546239987] 'process raft request'  (duration: 130.391568ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:14:04.896619Z","caller":"traceutil/trace.go:172","msg":"trace[1784497646] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"130.637142ms","start":"2025-09-19T23:14:04.765936Z","end":"2025-09-19T23:14:04.896573Z","steps":["trace[1784497646] 'process raft request'  (duration: 104.769724ms)","trace[1784497646] 'compare'  (duration: 25.620753ms)"],"step_count":2}
	
	
	==> etcd [d16672651834257c44a7b8bb09a2a96900893d4c44bf6bc2f77df309038082a3] <==
	{"level":"warn","ts":"2025-09-19T23:14:16.141970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.149711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.173294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.181874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.190554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.198738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.208363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.217317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.228731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.236617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.244347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.252923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.260019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.268636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.283526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.293896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.303819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.312817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.321723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.330749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.339876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.359652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.364882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.374731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.458009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36786","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:14:24 up  1:56,  0 users,  load average: 5.39, 4.10, 2.53
	Linux newest-cni-312465 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [03fb64d8cae80bb3a6cbd4e75fb9b8bed32c133d882bac12b3e69b1d615553f9] <==
	I0919 23:14:03.726806       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0919 23:14:03.727181       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0919 23:14:03.727503       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:14:03.727526       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:14:03.819369       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:14:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:14:04.119672       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:14:04.119853       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:14:04.119882       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:14:04.120205       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0919 23:14:04.620400       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:14:04.620428       1 metrics.go:72] Registering metrics
	I0919 23:14:04.620484       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kindnet [ff15da67a3dd2104eb9d299adf7cce30bf5157e1ed3c5ea757353118130340cc] <==
	I0919 23:14:18.358445       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0919 23:14:18.358781       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0919 23:14:18.358909       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:14:18.358931       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:14:18.358963       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:14:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:14:18.756607       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:14:18.756675       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:14:18.756698       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:14:18.757940       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0919 23:14:19.057688       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:14:19.057724       1 metrics.go:72] Registering metrics
	I0919 23:14:19.057813       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [40de363bc7b2f50d2bd03fca2ad5e8490b8a75e657c34924508263e639e1e39f] <==
	I0919 23:14:17.605715       1 controller.go:667] quota admission added evaluator for: namespaces
	I0919 23:14:17.704733       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 23:14:17.717719       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 23:14:17.792873       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.94.135"}
	I0919 23:14:17.816952       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.153.102"}
	I0919 23:14:17.969216       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 23:14:18.075084       1 handler_proxy.go:99] no RequestInfo found in the context
	W0919 23:14:18.075084       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:14:18.075134       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 23:14:18.075190       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0919 23:14:18.075238       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:14:18.076406       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:14:19.141553       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0919 23:14:20.630097       1 controller.go:667] quota admission added evaluator for: endpoints
	I0919 23:14:22.522612       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 23:14:22.525029       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	{"level":"warn","ts":"2025-09-19T23:14:23.664280Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0006014a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0919 23:14:23.664466       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:14:23.664493       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 23:14:23.664521       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.046µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0919 23:14:23.665898       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:14:23.666050       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.726966ms" method="POST" path="/api/v1/namespaces/default/events" result=null
	
	
	==> kube-apiserver [879c323689e20cb30fefa0341fc12a9b42debf5a0380f2c22c16c23aefb17b5e] <==
	I0919 23:13:56.355505       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0919 23:13:56.365521       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 23:14:02.067292       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 23:14:02.267485       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:14:02.369080       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:14:02.374977       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:14:04.361918       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 23:14:04.366861       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:14:04.366930       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 23:14:04.366994       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0919 23:14:04.762362       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.97.148.35"}
	W0919 23:14:04.899652       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:14:04.899829       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 23:14:04.902748       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	W0919 23:14:04.907664       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:14:04.907735       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-controller-manager [330d509e7f38bc22edbf12409609e434f6b3103c1619d9e3d23b87443e86e201] <==
	I0919 23:14:20.573099       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 23:14:20.573230       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 23:14:20.573419       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 23:14:20.573677       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 23:14:20.573700       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 23:14:20.575355       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 23:14:20.577460       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0919 23:14:20.578110       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 23:14:20.579482       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 23:14:20.579610       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 23:14:20.579788       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 23:14:20.585279       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 23:14:20.585512       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 23:14:20.585668       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 23:14:20.586025       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:14:20.607274       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:14:20.613764       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0919 23:14:20.617145       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 23:14:20.617364       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 23:14:20.617447       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-312465"
	I0919 23:14:20.617535       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 23:14:20.623489       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 23:14:20.623551       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0919 23:14:20.639065       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:14:22.405344       1 request.go:752] "Waited before sending request" delay="1.582482942s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://192.168.94.2:8443/api/v1/namespaces/kube-system/serviceaccounts/deployment-controller/token"
	
	
	==> kube-controller-manager [529122f97b267c7d2c20849ccbcc739630ced21969d0da2315cc2bb32dc0c09e] <==
	I0919 23:14:01.416297       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 23:14:01.416379       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-312465"
	I0919 23:14:01.416426       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 23:14:01.416484       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 23:14:01.417869       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 23:14:01.421010       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 23:14:01.421535       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 23:14:01.421630       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:14:01.421649       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 23:14:01.421673       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0919 23:14:01.421675       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 23:14:01.421685       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 23:14:01.421773       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 23:14:01.421973       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 23:14:01.422070       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 23:14:01.422097       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 23:14:01.423727       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0919 23:14:01.423846       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 23:14:01.423962       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 23:14:01.426612       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:14:01.428935       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 23:14:01.438991       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 23:14:01.445291       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:14:01.448514       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	E0919 23:14:04.399416       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-746fcd58dc\" failed with pods \"metrics-server-746fcd58dc-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [876a049bd2c0944bdc54c0e0c28dacdc47d4c43e6f9debfb640fb0ae0ebc09e3] <==
	I0919 23:14:17.821294       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:14:17.892060       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:14:17.992276       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:14:17.992321       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0919 23:14:17.992455       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:14:18.020801       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:14:18.020874       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:14:18.027755       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:14:18.028231       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:14:18.028270       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:14:18.029958       1 config.go:309] "Starting node config controller"
	I0919 23:14:18.029975       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:14:18.029983       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:14:18.030215       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:14:18.030283       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:14:18.030371       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:14:18.030321       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:14:18.030239       1 config.go:200] "Starting service config controller"
	I0919 23:14:18.030452       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:14:18.131363       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:14:18.131391       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 23:14:18.131374       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [aa4f1d7ae4be8607dc91cdece6dc505e811e83bc72a4d7ac0cf5dbb0e3120d87] <==
	I0919 23:14:03.307101       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:14:03.397896       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:14:03.499103       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:14:03.499145       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0919 23:14:03.499415       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:14:03.546823       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:14:03.546952       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:14:03.555595       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:14:03.559926       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:14:03.559980       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:14:03.562690       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:14:03.562714       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:14:03.563203       1 config.go:200] "Starting service config controller"
	I0919 23:14:03.563214       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:14:03.564543       1 config.go:309] "Starting node config controller"
	I0919 23:14:03.564559       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:14:03.564566       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:14:03.567766       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:14:03.567943       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:14:03.663436       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:14:03.663445       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 23:14:03.668915       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [02f3965879829c98ed424d224c8a4ecc467b95a2b385c7eb4440639f1bccf628] <==
	E0919 23:13:53.464021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 23:13:53.464841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 23:13:53.464844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 23:13:53.464914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 23:13:53.464698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 23:13:54.359266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 23:13:54.384937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 23:13:54.429091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 23:13:54.434372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 23:13:54.478498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 23:13:54.495097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 23:13:54.594486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 23:13:54.643484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 23:13:54.745929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 23:13:54.770074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 23:13:54.792464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 23:13:54.849171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 23:13:54.866839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 23:13:54.896416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 23:13:54.897327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 23:13:54.923395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 23:13:54.957107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 23:13:55.044298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 23:13:55.052558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I0919 23:13:57.457519       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [5d374a5f9fa61d8c19b81dd5eb8f4477bb61006527c2942d6afa885f27f4d80d] <==
	I0919 23:14:16.194618       1 serving.go:386] Generated self-signed cert in-memory
	I0919 23:14:17.061424       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 23:14:17.061468       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:14:17.069921       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0919 23:14:17.069969       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0919 23:14:17.070039       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:14:17.070068       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:14:17.070098       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:14:17.070112       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:14:17.074659       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 23:14:17.074775       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 23:14:17.171014       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0919 23:14:17.171197       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:14:17.172494       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.153257    2283 kubelet.go:475] "Attempting to sync node with API server"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.153300    2283 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.153334    2283 kubelet.go:387] "Adding apiserver pod source"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.153355    2283 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.155072    2283 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.7.27" apiVersion="v1"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.155794    2283 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.155836    2283 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.156899    2283 server.go:1262] "Started kubelet"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.157324    2283 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.157509    2283 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.157615    2283 server_v1.go:49] "podresources" method="list" useActivePods=true
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.157941    2283 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.161753    2283 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.166579    2283 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.168889    2283 volume_manager.go:313] "Starting Kubelet Volume Manager"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.169330    2283 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: E0919 23:14:25.169622    2283 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"newest-cni-312465\" not found"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.171023    2283 reconciler.go:29] "Reconciler: start to sync state"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.171995    2283 server.go:310] "Adding debug handlers to kubelet server"
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.176314    2283 factory.go:223] Registration of the systemd container factory successfully
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.176486    2283 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: I0919 23:14:25.179407    2283 factory.go:223] Registration of the containerd container factory successfully
	Sep 19 23:14:25 newest-cni-312465 kubelet[2283]: E0919 23:14:25.179475    2283 kubelet.go:1686] "Failed to start cAdvisor" err="inotify_init: too many open files"
	Sep 19 23:14:25 newest-cni-312465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Sep 19 23:14:25 newest-cni-312465 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	
	==> storage-provisioner [3bb98db115b6ad13cceece8b521436100bb04d0ceb273c75d323e94ef7440804] <==
	I0919 23:14:03.784231       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 23:14:03.796533       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 23:14:03.796587       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0919 23:14:03.800242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:14:03.808034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0919 23:14:03.808286       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 23:14:03.808672       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-312465_a3e2dca0-2fc8-4412-9a2e-1720b60169f2!
	I0919 23:14:03.809402       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"050de51a-8669-4e49-a7e4-ac16a3fefa25", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-312465_a3e2dca0-2fc8-4412-9a2e-1720b60169f2 became leader
	W0919 23:14:03.818056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:14:03.826974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0919 23:14:03.909772       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-312465_a3e2dca0-2fc8-4412-9a2e-1720b60169f2!
	
	
	==> storage-provisioner [84fd9d8b5e329e51d64ecd20963e0a6601ce8ea362348c6113030b16cc394f0a] <==
	I0919 23:14:17.865524       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-312465 -n newest-cni-312465
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-312465 -n newest-cni-312465: exit status 2 (431.154104ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-312465 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-xsnhs metrics-server-746fcd58dc-sbqxp dashboard-metrics-scraper-6ffb444bf9-wpkq4 kubernetes-dashboard-855c9754f9-c6hgc
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-312465 describe pod coredns-66bc5c9577-xsnhs metrics-server-746fcd58dc-sbqxp dashboard-metrics-scraper-6ffb444bf9-wpkq4 kubernetes-dashboard-855c9754f9-c6hgc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-312465 describe pod coredns-66bc5c9577-xsnhs metrics-server-746fcd58dc-sbqxp dashboard-metrics-scraper-6ffb444bf9-wpkq4 kubernetes-dashboard-855c9754f9-c6hgc: exit status 1 (85.567358ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-xsnhs" not found
	Error from server (NotFound): pods "metrics-server-746fcd58dc-sbqxp" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-wpkq4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-c6hgc" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-312465 describe pod coredns-66bc5c9577-xsnhs metrics-server-746fcd58dc-sbqxp dashboard-metrics-scraper-6ffb444bf9-wpkq4 kubernetes-dashboard-855c9754f9-c6hgc: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-312465
helpers_test.go:243: (dbg) docker inspect newest-cni-312465:

-- stdout --
	[
	    {
	        "Id": "2da8ead24bf634d8f6a729243d038d798fafa21ac7bf909ed9bba2f621fc3b69",
	        "Created": "2025-09-19T23:13:32.170200572Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 316984,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T23:14:07.83636793Z",
	            "FinishedAt": "2025-09-19T23:14:05.581387247Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/2da8ead24bf634d8f6a729243d038d798fafa21ac7bf909ed9bba2f621fc3b69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2da8ead24bf634d8f6a729243d038d798fafa21ac7bf909ed9bba2f621fc3b69/hostname",
	        "HostsPath": "/var/lib/docker/containers/2da8ead24bf634d8f6a729243d038d798fafa21ac7bf909ed9bba2f621fc3b69/hosts",
	        "LogPath": "/var/lib/docker/containers/2da8ead24bf634d8f6a729243d038d798fafa21ac7bf909ed9bba2f621fc3b69/2da8ead24bf634d8f6a729243d038d798fafa21ac7bf909ed9bba2f621fc3b69-json.log",
	        "Name": "/newest-cni-312465",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-312465:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-312465",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2da8ead24bf634d8f6a729243d038d798fafa21ac7bf909ed9bba2f621fc3b69",
	                "LowerDir": "/var/lib/docker/overlay2/6d62e2f859b958fea90eb7b70c32d05ee58d2721ea59c3f0af602c2e8c0e8707-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6d62e2f859b958fea90eb7b70c32d05ee58d2721ea59c3f0af602c2e8c0e8707/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6d62e2f859b958fea90eb7b70c32d05ee58d2721ea59c3f0af602c2e8c0e8707/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6d62e2f859b958fea90eb7b70c32d05ee58d2721ea59c3f0af602c2e8c0e8707/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-312465",
	                "Source": "/var/lib/docker/volumes/newest-cni-312465/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-312465",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-312465",
	                "name.minikube.sigs.k8s.io": "newest-cni-312465",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9ab14afe362fa4be0457cf5b1c00525ca82f33eaae541a71efa68e2c4f58cbe8",
	            "SandboxKey": "/var/run/docker/netns/9ab14afe362f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-312465": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:e3:1c:fd:f5:ae",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ecb131f9ec1b2c372dfdf9b0ed72aaad0b8b0fc77db2fbc20949c0f4dfc0485e",
	                    "EndpointID": "294103dc868b470b92ce317bb9d64dd2a19fe0b311453063a6f9f2da355e591c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-312465",
	                        "2da8ead24bf6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-312465 -n newest-cni-312465
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-312465 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-312465 logs -n 25: (1.921623798s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ addons  │ enable dashboard -p embed-certs-403962 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-403962        │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:12 UTC │
	│ start   │ -p embed-certs-403962 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-403962        │ jenkins │ v1.37.0 │ 19 Sep 25 23:12 UTC │ 19 Sep 25 23:13 UTC │
	│ start   │ -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-430859 │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-430859 │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ delete  │ -p kubernetes-upgrade-430859                                                                                                                                                                                                                        │ kubernetes-upgrade-430859 │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ start   │ -p newest-cni-312465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:14 UTC │
	│ image   │ no-preload-364197 image list --format=json                                                                                                                                                                                                          │ no-preload-364197         │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ pause   │ -p no-preload-364197 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-364197         │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ unpause │ -p no-preload-364197 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-364197         │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ image   │ embed-certs-403962 image list --format=json                                                                                                                                                                                                         │ embed-certs-403962        │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ pause   │ -p embed-certs-403962 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-403962        │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ unpause │ -p embed-certs-403962 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-403962        │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ delete  │ -p no-preload-364197                                                                                                                                                                                                                                │ no-preload-364197         │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:14 UTC │
	│ delete  │ -p no-preload-364197                                                                                                                                                                                                                                │ no-preload-364197         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ start   │ -p auto-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │                     │
	│ delete  │ -p embed-certs-403962                                                                                                                                                                                                                               │ embed-certs-403962        │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-312465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ stop    │ -p newest-cni-312465 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ delete  │ -p embed-certs-403962                                                                                                                                                                                                                               │ embed-certs-403962        │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-312465 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ start   │ -p newest-cni-312465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ start   │ -p kindnet-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd                                                                                                      │ kindnet-896447            │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │                     │
	│ image   │ newest-cni-312465 image list --format=json                                                                                                                                                                                                          │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ pause   │ -p newest-cni-312465 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	│ unpause │ -p newest-cni-312465 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-312465         │ jenkins │ v1.37.0 │ 19 Sep 25 23:14 UTC │ 19 Sep 25 23:14 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:14:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:14:07.545450  316421 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:14:07.545589  316421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:14:07.545601  316421 out.go:374] Setting ErrFile to fd 2...
	I0919 23:14:07.545607  316421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:14:07.545908  316421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 23:14:07.546484  316421 out.go:368] Setting JSON to false
	I0919 23:14:07.547671  316421 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6992,"bootTime":1758316656,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:14:07.547785  316421 start.go:140] virtualization: kvm guest
	I0919 23:14:07.549879  316421 out.go:179] * [kindnet-896447] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:14:07.552990  316421 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:14:07.553022  316421 notify.go:220] Checking for updates...
	I0919 23:14:07.559610  316421 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:14:07.561382  316421 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:14:07.566189  316421 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 23:14:07.568116  316421 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:14:07.570024  316421 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:14:07.546807  316407 config.go:182] Loaded profile config "newest-cni-312465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:07.547363  316407 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:14:07.577693  316407 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:14:07.577797  316407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:14:07.648780  316407 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:68 SystemTime:2025-09-19 23:14:07.636150467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:14:07.648963  316407 docker.go:318] overlay module found
	I0919 23:14:07.652148  316407 out.go:179] * Using the docker driver based on existing profile
	I0919 23:14:07.572300  316421 config.go:182] Loaded profile config "auto-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:07.572495  316421 config.go:182] Loaded profile config "default-k8s-diff-port-149888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:07.572664  316421 config.go:182] Loaded profile config "newest-cni-312465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:07.572820  316421 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:14:07.600815  316421 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:14:07.600925  316421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:14:07.680865  316421 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:68 SystemTime:2025-09-19 23:14:07.665771969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:14:07.681124  316421 docker.go:318] overlay module found
	I0919 23:14:07.688792  316421 out.go:179] * Using the docker driver based on user configuration
	I0919 23:14:07.655140  316407 start.go:304] selected driver: docker
	I0919 23:14:07.655198  316407 start.go:918] validating driver "docker" against &{Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:14:07.655339  316407 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:14:07.655999  316407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:14:07.737538  316407 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:68 SystemTime:2025-09-19 23:14:07.72575158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:14:07.737821  316407 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0919 23:14:07.737843  316407 cni.go:84] Creating CNI manager for ""
	I0919 23:14:07.737895  316407 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:14:07.737941  316407 start.go:348] cluster config:
	{Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:14:07.741137  316407 out.go:179] * Starting "newest-cni-312465" primary control-plane node in "newest-cni-312465" cluster
	I0919 23:14:07.742815  316407 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 23:14:07.744122  316407 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:14:07.689907  316421 start.go:304] selected driver: docker
	I0919 23:14:07.689929  316421 start.go:918] validating driver "docker" against <nil>
	I0919 23:14:07.689954  316421 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:14:07.690652  316421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:14:07.767818  316421 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:false NGoroutines:80 SystemTime:2025-09-19 23:14:07.754934801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:14:07.768061  316421 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 23:14:07.768365  316421 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:14:07.770670  316421 out.go:179] * Using Docker driver with root privileges
	I0919 23:14:07.771969  316421 cni.go:84] Creating CNI manager for "kindnet"
	I0919 23:14:07.771993  316421 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 23:14:07.772099  316421 start.go:348] cluster config:
	{Name:kindnet-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I0919 23:14:07.774357  316421 out.go:179] * Starting "kindnet-896447" primary control-plane node in "kindnet-896447" cluster
	I0919 23:14:07.775668  316421 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 23:14:07.776953  316421 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:14:07.745565  316407 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:14:07.745615  316407 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 23:14:07.745624  316407 cache.go:58] Caching tarball of preloaded images
	I0919 23:14:07.745696  316407 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:14:07.745713  316407 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:14:07.745724  316407 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 23:14:07.745887  316407 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json ...
	I0919 23:14:07.773705  316407 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:14:07.773726  316407 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:14:07.773767  316407 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:14:07.773797  316407 start.go:360] acquireMachinesLock for newest-cni-312465: {Name:mkdaed0f91b48ccb0806887f4c48e7b6207e9286 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:14:07.773868  316407 start.go:364] duration metric: took 45.525µs to acquireMachinesLock for "newest-cni-312465"
	I0919 23:14:07.773892  316407 start.go:96] Skipping create...Using existing machine configuration
	I0919 23:14:07.773898  316407 fix.go:54] fixHost starting: 
	I0919 23:14:07.774109  316407 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:14:07.796796  316407 fix.go:112] recreateIfNeeded on newest-cni-312465: state=Stopped err=<nil>
	W0919 23:14:07.796850  316407 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 23:14:07.778192  316421 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:14:07.778230  316421 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:14:07.778236  316421 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 23:14:07.778280  316421 cache.go:58] Caching tarball of preloaded images
	I0919 23:14:07.778375  316421 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:14:07.778387  316421 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 23:14:07.778510  316421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/config.json ...
	I0919 23:14:07.778540  316421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/config.json: {Name:mkfc753d97a896ef89666bc40d14195b2cd88207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:07.804596  316421 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:14:07.804618  316421 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:14:07.804637  316421 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:14:07.804746  316421 start.go:360] acquireMachinesLock for kindnet-896447: {Name:mke345f56beddc08f221f0e34bb3ed88e95b38fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:14:07.804878  316421 start.go:364] duration metric: took 107.11µs to acquireMachinesLock for "kindnet-896447"
	I0919 23:14:07.804913  316421 start.go:93] Provisioning new machine with config: &{Name:kindnet-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:14:07.805025  316421 start.go:125] createHost starting for "" (driver="docker")
	I0919 23:14:03.138342  314456 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:14:03.138581  314456 start.go:159] libmachine.API.Create for "auto-896447" (driver="docker")
	I0919 23:14:03.138611  314456 client.go:168] LocalClient.Create starting
	I0919 23:14:03.138723  314456 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 23:14:03.138757  314456 main.go:141] libmachine: Decoding PEM data...
	I0919 23:14:03.138767  314456 main.go:141] libmachine: Parsing certificate...
	I0919 23:14:03.138818  314456 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 23:14:03.138833  314456 main.go:141] libmachine: Decoding PEM data...
	I0919 23:14:03.138841  314456 main.go:141] libmachine: Parsing certificate...
	I0919 23:14:03.139221  314456 cli_runner.go:164] Run: docker network inspect auto-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:14:03.163836  314456 cli_runner.go:211] docker network inspect auto-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:14:03.163916  314456 network_create.go:284] running [docker network inspect auto-896447] to gather additional debugging logs...
	I0919 23:14:03.163947  314456 cli_runner.go:164] Run: docker network inspect auto-896447
	W0919 23:14:03.187238  314456 cli_runner.go:211] docker network inspect auto-896447 returned with exit code 1
	I0919 23:14:03.187271  314456 network_create.go:287] error running [docker network inspect auto-896447]: docker network inspect auto-896447: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-896447 not found
	I0919 23:14:03.187284  314456 network_create.go:289] output of [docker network inspect auto-896447]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-896447 not found
	
	** /stderr **
	I0919 23:14:03.187380  314456 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:14:03.225659  314456 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-465af21e2d8d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:b5:e0:20:10:48} reservation:<nil>}
	I0919 23:14:03.226784  314456 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd9dd989b948 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:ff:32:51:a1:28} reservation:<nil>}
	I0919 23:14:03.227938  314456 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d5cd7c460be9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:d8:0f:d3:49:b9} reservation:<nil>}
	I0919 23:14:03.229281  314456 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-eeb244b5b4d9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:19:45:7a:f8:43} reservation:<nil>}
	I0919 23:14:03.230695  314456 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000491960}
	I0919 23:14:03.230725  314456 network_create.go:124] attempt to create docker network auto-896447 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0919 23:14:03.230780  314456 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-896447 auto-896447
	I0919 23:14:03.312517  314456 network_create.go:108] docker network auto-896447 192.168.85.0/24 created
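	The subnet scan above (network.go skipping 192.168.49.0/24 through 192.168.76.0/24 and settling on 192.168.85.0/24) follows a first-free-candidate strategy: walk a list of private /24 bases and take the first one no existing bridge network occupies. A minimal Go sketch of that idea is below; the candidate list and the `taken` set are illustrative assumptions, not minikube's actual implementation.

```go
package main

import "fmt"

// pickFreeSubnet returns the first candidate /24 that is not already used by
// an existing bridge network. This only sketches the strategy visible in the
// log above; minikube's real logic lives elsewhere.
func pickFreeSubnet(candidates []string, taken map[string]bool) (string, bool) {
	for _, cidr := range candidates {
		if taken[cidr] {
			continue // "skipping subnet ... that is taken"
		}
		return cidr, true // "using free private subnet ..."
	}
	return "", false
}

func main() {
	// Candidates mirror the progression seen in the log (assumed ordering).
	candidates := []string{
		"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24",
		"192.168.76.0/24", "192.168.85.0/24",
	}
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	if cidr, ok := pickFreeSubnet(candidates, taken); ok {
		fmt.Println("using free private subnet", cidr) // prints 192.168.85.0/24
	}
}
```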
	I0919 23:14:03.312557  314456 kic.go:121] calculated static IP "192.168.85.2" for the "auto-896447" container
	I0919 23:14:03.312645  314456 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:14:03.338796  314456 cli_runner.go:164] Run: docker volume create auto-896447 --label name.minikube.sigs.k8s.io=auto-896447 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:14:03.379021  314456 oci.go:103] Successfully created a docker volume auto-896447
	I0919 23:14:03.379332  314456 cli_runner.go:164] Run: docker run --rm --name auto-896447-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-896447 --entrypoint /usr/bin/test -v auto-896447:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:14:03.920792  314456 oci.go:107] Successfully prepared a docker volume auto-896447
	I0919 23:14:03.920827  314456 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:14:03.920851  314456 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:14:03.920918  314456 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-896447:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 23:14:07.233839  314456 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-896447:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.312873654s)
	I0919 23:14:07.233870  314456 kic.go:203] duration metric: took 3.313016518s to extract preloaded images to volume ...
	W0919 23:14:07.233954  314456 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:14:07.233982  314456 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:14:07.234020  314456 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:14:07.307662  314456 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-896447 --name auto-896447 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-896447 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-896447 --network auto-896447 --ip 192.168.85.2 --volume auto-896447:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:14:07.640242  314456 cli_runner.go:164] Run: docker container inspect auto-896447 --format={{.State.Running}}
	I0919 23:14:07.667399  314456 cli_runner.go:164] Run: docker container inspect auto-896447 --format={{.State.Status}}
	I0919 23:14:07.694975  314456 cli_runner.go:164] Run: docker exec auto-896447 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:14:07.765646  314456 oci.go:144] the created container "auto-896447" has a running status.
	I0919 23:14:07.765682  314456 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/auto-896447/id_rsa...
	I0919 23:14:05.235334  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:14:05.235366  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:14:05.235374  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:14:05.235383  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:14:05.235391  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:14:05.235397  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:14:05.235405  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:14:05.235410  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:14:05.235416  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:14:05.235443  294587 retry.go:31] will retry after 6.715487454s: missing components: kube-dns
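	The interleaved 294587 lines above are the readiness poll for default-k8s-diff-port-149888: the kube-system pod list is re-checked and, while coredns is still Pending, the check is retried after a growing delay ("will retry after 6.715487454s: missing components: kube-dns"). A rough Go sketch of that poll-and-retry shape follows; the delays, timeout and component names are made up for illustration and this is not minikube's retry package.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForComponents polls check until no components are missing or the
// deadline passes. Delays grow between attempts, roughly like the
// "will retry after ..." lines in the log. Purely illustrative.
func waitForComponents(check func() []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 2 * time.Second
	for time.Now().Before(deadline) {
		missing := check()
		if len(missing) == 0 {
			return nil
		}
		fmt.Printf("will retry after %s: missing components: %v\n", delay, missing)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait between attempts
	}
	return errors.New("timed out waiting for components")
}

func main() {
	attempts := 0
	_ = waitForComponents(func() []string {
		attempts++
		if attempts < 3 {
			return []string{"kube-dns"} // coredns pod still Pending
		}
		return nil
	}, time.Minute)
}
```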
	I0919 23:14:07.871215  314456 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/auto-896447/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:14:07.911234  314456 cli_runner.go:164] Run: docker container inspect auto-896447 --format={{.State.Status}}
	I0919 23:14:07.948482  314456 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:14:07.948507  314456 kic_runner.go:114] Args: [docker exec --privileged auto-896447 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:14:08.029197  314456 cli_runner.go:164] Run: docker container inspect auto-896447 --format={{.State.Status}}
	I0919 23:14:08.055498  314456 machine.go:93] provisionDockerMachine start ...
	I0919 23:14:08.055605  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:08.088840  314456 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:08.089645  314456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I0919 23:14:08.089689  314456 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:14:08.239460  314456 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-896447
	
	I0919 23:14:08.239490  314456 ubuntu.go:182] provisioning hostname "auto-896447"
	I0919 23:14:08.239558  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:08.266255  314456 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:08.266566  314456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I0919 23:14:08.266593  314456 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-896447 && echo "auto-896447" | sudo tee /etc/hostname
	I0919 23:14:08.435368  314456 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-896447
	
	I0919 23:14:08.435449  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:08.468455  314456 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:08.468769  314456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I0919 23:14:08.469004  314456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-896447' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-896447/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-896447' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:14:08.631462  314456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:14:08.631506  314456 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 23:14:08.631533  314456 ubuntu.go:190] setting up certificates
	I0919 23:14:08.631546  314456 provision.go:84] configureAuth start
	I0919 23:14:08.631611  314456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-896447
	I0919 23:14:08.653148  314456 provision.go:143] copyHostCerts
	I0919 23:14:08.653239  314456 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 23:14:08.653256  314456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 23:14:08.653351  314456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 23:14:08.653474  314456 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 23:14:08.653487  314456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 23:14:08.653529  314456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 23:14:08.653611  314456 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 23:14:08.653624  314456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 23:14:08.653664  314456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 23:14:08.653748  314456 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.auto-896447 san=[127.0.0.1 192.168.85.2 auto-896447 localhost minikube]
	I0919 23:14:09.278100  314456 provision.go:177] copyRemoteCerts
	I0919 23:14:09.278176  314456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:14:09.278231  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:09.301359  314456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/auto-896447/id_rsa Username:docker}
	I0919 23:14:09.403022  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 23:14:09.434598  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0919 23:14:09.462442  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 23:14:09.491772  314456 provision.go:87] duration metric: took 860.211434ms to configureAuth
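	The configureAuth step above copies the host CA material and then generates a server certificate whose SANs cover 127.0.0.1, 192.168.85.2, auto-896447, localhost and minikube. As a hedged illustration of what such a certificate looks like in Go's crypto/x509 terms, here is a self-signed sketch: the real flow signs with the minikube CA key rather than self-signing, and the subject, key size and lifetime below are assumptions.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.auto-896447"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirror the san=[...] list in the provision.go line above.
		DNSNames:    []string{"auto-896447", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	// Self-signed for the sketch: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```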
	I0919 23:14:09.491798  314456 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:14:09.491935  314456 config.go:182] Loaded profile config "auto-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:09.491946  314456 machine.go:96] duration metric: took 1.436426219s to provisionDockerMachine
	I0919 23:14:09.491952  314456 client.go:171] duration metric: took 6.353335489s to LocalClient.Create
	I0919 23:14:09.491969  314456 start.go:167] duration metric: took 6.35338915s to libmachine.API.Create "auto-896447"
	I0919 23:14:09.491978  314456 start.go:293] postStartSetup for "auto-896447" (driver="docker")
	I0919 23:14:09.491985  314456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:14:09.492030  314456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:14:09.492068  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:09.512712  314456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/auto-896447/id_rsa Username:docker}
	I0919 23:14:09.655581  314456 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:14:09.660004  314456 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:14:09.660046  314456 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:14:09.660058  314456 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:14:09.660067  314456 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:14:09.660080  314456 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 23:14:09.660170  314456 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 23:14:09.660277  314456 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 23:14:09.660445  314456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:14:09.672252  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:14:09.761837  314456 start.go:296] duration metric: took 269.845026ms for postStartSetup
	I0919 23:14:09.817334  314456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-896447
	I0919 23:14:09.840461  314456 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/config.json ...
	I0919 23:14:09.881702  314456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:14:09.881770  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:09.903252  314456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/auto-896447/id_rsa Username:docker}
	I0919 23:14:09.998022  314456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:14:10.004074  314456 start.go:128] duration metric: took 6.869525965s to createHost
	I0919 23:14:10.004113  314456 start.go:83] releasing machines lock for "auto-896447", held for 6.869689556s
	I0919 23:14:10.004324  314456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-896447
	I0919 23:14:10.032708  314456 ssh_runner.go:195] Run: cat /version.json
	I0919 23:14:10.032764  314456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:14:10.032767  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:10.032841  314456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-896447
	I0919 23:14:10.057994  314456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/auto-896447/id_rsa Username:docker}
	I0919 23:14:10.058468  314456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/auto-896447/id_rsa Username:docker}
	I0919 23:14:10.247200  314456 ssh_runner.go:195] Run: systemctl --version
	I0919 23:14:10.253261  314456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:14:10.259353  314456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:14:10.760782  314456 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:14:10.760874  314456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:14:11.008847  314456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 23:14:11.008872  314456 start.go:495] detecting cgroup driver to use...
	I0919 23:14:11.008907  314456 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:14:11.008956  314456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 23:14:11.025090  314456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:14:11.040151  314456 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:14:11.040238  314456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:14:11.060252  314456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:14:11.078289  314456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:14:11.147399  314456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:14:11.303364  314456 docker.go:234] disabling docker service ...
	I0919 23:14:11.303457  314456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:14:11.326400  314456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:14:11.340322  314456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:14:11.484467  314456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:14:11.561333  314456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:14:11.574426  314456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:14:11.595678  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:14:11.685978  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:14:11.743946  314456 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:14:11.744025  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:14:11.813228  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:14:11.827261  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:14:11.840017  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:14:11.852548  314456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:14:11.868315  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:14:11.884044  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:14:11.899639  314456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:14:11.912487  314456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:14:11.924008  314456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:14:11.935620  314456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:14:12.012697  314456 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 23:14:12.157503  314456 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 23:14:12.157576  314456 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 23:14:12.162651  314456 start.go:563] Will wait 60s for crictl version
	I0919 23:14:12.162719  314456 ssh_runner.go:195] Run: which crictl
	I0919 23:14:12.166826  314456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:14:12.206011  314456 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 23:14:12.206090  314456 ssh_runner.go:195] Run: containerd --version
	I0919 23:14:12.233985  314456 ssh_runner.go:195] Run: containerd --version
	I0919 23:14:12.268850  314456 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
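	The block of sed invocations above rewrites /etc/containerd/config.toml in place (pinning the sandbox image, forcing SystemdCgroup = true, switching to the runc v2 runtime, fixing conf_dir) before containerd is restarted and crictl is probed. A small Go sketch of the same rewrite idea, applied to an assumed TOML fragment rather than a real config file, looks like this:

```go
package main

import (
	"fmt"
	"regexp"
)

// Regex edits equivalent in spirit to two of the sed calls in the log:
// pin the sandbox image and force SystemdCgroup = true. The sample TOML
// below is a stand-in, not a complete containerd config.
func main() {
	config := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false
`
	sandbox := regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`)
	config = sandbox.ReplaceAllString(config, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)

	cgroup := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	config = cgroup.ReplaceAllString(config, "${1}SystemdCgroup = true")

	fmt.Print(config)
}
```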
	I0919 23:14:07.799109  316407 out.go:252] * Restarting existing docker container for "newest-cni-312465" ...
	I0919 23:14:07.799250  316407 cli_runner.go:164] Run: docker start newest-cni-312465
	I0919 23:14:08.165502  316407 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:14:08.191333  316407 kic.go:430] container "newest-cni-312465" state is running.
	I0919 23:14:08.191985  316407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:14:08.220318  316407 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/config.json ...
	I0919 23:14:08.220619  316407 machine.go:93] provisionDockerMachine start ...
	I0919 23:14:08.220704  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:08.249108  316407 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:08.249544  316407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I0919 23:14:08.249575  316407 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:14:08.250359  316407 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41604->127.0.0.1:33109: read: connection reset by peer
	I0919 23:14:11.394931  316407 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-312465
	
	I0919 23:14:11.394970  316407 ubuntu.go:182] provisioning hostname "newest-cni-312465"
	I0919 23:14:11.395023  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:11.416702  316407 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:11.416943  316407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I0919 23:14:11.416972  316407 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-312465 && echo "newest-cni-312465" | sudo tee /etc/hostname
	I0919 23:14:11.580189  316407 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-312465
	
	I0919 23:14:11.580280  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:11.602904  316407 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:11.603213  316407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I0919 23:14:11.603249  316407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-312465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-312465/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-312465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:14:11.746229  316407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:14:11.746263  316407 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 23:14:11.746299  316407 ubuntu.go:190] setting up certificates
	I0919 23:14:11.746314  316407 provision.go:84] configureAuth start
	I0919 23:14:11.746382  316407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:14:11.766971  316407 provision.go:143] copyHostCerts
	I0919 23:14:11.767027  316407 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 23:14:11.767039  316407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 23:14:11.809339  316407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 23:14:11.809538  316407 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 23:14:11.809553  316407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 23:14:11.809590  316407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 23:14:11.809685  316407 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 23:14:11.809695  316407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 23:14:11.809720  316407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 23:14:11.809787  316407 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.newest-cni-312465 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-312465]
	I0919 23:14:11.886967  316407 provision.go:177] copyRemoteCerts
	I0919 23:14:11.887021  316407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:14:11.887187  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:11.910808  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:12.013119  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 23:14:12.045568  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0919 23:14:12.080432  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:14:12.113912  316407 provision.go:87] duration metric: took 367.58246ms to configureAuth
	I0919 23:14:12.113947  316407 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:14:12.114239  316407 config.go:182] Loaded profile config "newest-cni-312465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:12.114261  316407 machine.go:96] duration metric: took 3.893618945s to provisionDockerMachine
	I0919 23:14:12.114272  316407 start.go:293] postStartSetup for "newest-cni-312465" (driver="docker")
	I0919 23:14:12.114286  316407 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:14:12.114352  316407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:14:12.114401  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:12.138333  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:12.243253  316407 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:14:12.248521  316407 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:14:12.248559  316407 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:14:12.248626  316407 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:14:12.248650  316407 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:14:12.248668  316407 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 23:14:12.248746  316407 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 23:14:12.248850  316407 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 23:14:12.248986  316407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:14:12.261592  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:14:12.296995  316407 start.go:296] duration metric: took 182.703774ms for postStartSetup
	I0919 23:14:12.297110  316407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:14:12.297172  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:12.320307  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:12.423965  316407 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:14:12.429220  316407 fix.go:56] duration metric: took 4.655313325s for fixHost
	I0919 23:14:12.429248  316407 start.go:83] releasing machines lock for "newest-cni-312465", held for 4.655366677s
	I0919 23:14:12.429319  316407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-312465
	I0919 23:14:12.452078  316407 ssh_runner.go:195] Run: cat /version.json
	I0919 23:14:12.452135  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:12.452446  316407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:14:12.452528  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:12.476354  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:12.477830  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:07.810675  316421 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:14:07.811012  316421 start.go:159] libmachine.API.Create for "kindnet-896447" (driver="docker")
	I0919 23:14:07.811053  316421 client.go:168] LocalClient.Create starting
	I0919 23:14:07.811166  316421 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 23:14:07.811216  316421 main.go:141] libmachine: Decoding PEM data...
	I0919 23:14:07.811246  316421 main.go:141] libmachine: Parsing certificate...
	I0919 23:14:07.811308  316421 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 23:14:07.811332  316421 main.go:141] libmachine: Decoding PEM data...
	I0919 23:14:07.811348  316421 main.go:141] libmachine: Parsing certificate...
	I0919 23:14:07.811810  316421 cli_runner.go:164] Run: docker network inspect kindnet-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:14:07.839117  316421 cli_runner.go:211] docker network inspect kindnet-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:14:07.839236  316421 network_create.go:284] running [docker network inspect kindnet-896447] to gather additional debugging logs...
	I0919 23:14:07.839256  316421 cli_runner.go:164] Run: docker network inspect kindnet-896447
	W0919 23:14:07.863663  316421 cli_runner.go:211] docker network inspect kindnet-896447 returned with exit code 1
	I0919 23:14:07.863691  316421 network_create.go:287] error running [docker network inspect kindnet-896447]: docker network inspect kindnet-896447: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-896447 not found
	I0919 23:14:07.863703  316421 network_create.go:289] output of [docker network inspect kindnet-896447]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-896447 not found
	
	** /stderr **
	I0919 23:14:07.863830  316421 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:14:07.891284  316421 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-465af21e2d8d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:b5:e0:20:10:48} reservation:<nil>}
	I0919 23:14:07.892295  316421 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd9dd989b948 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:ff:32:51:a1:28} reservation:<nil>}
	I0919 23:14:07.893272  316421 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d5cd7c460be9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:d8:0f:d3:49:b9} reservation:<nil>}
	I0919 23:14:07.894477  316421 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c37c80}
	I0919 23:14:07.894541  316421 network_create.go:124] attempt to create docker network kindnet-896447 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0919 23:14:07.894603  316421 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-896447 kindnet-896447
	I0919 23:14:08.010898  316421 network_create.go:108] docker network kindnet-896447 192.168.76.0/24 created
	I0919 23:14:08.010930  316421 kic.go:121] calculated static IP "192.168.76.2" for the "kindnet-896447" container
	I0919 23:14:08.010989  316421 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:14:08.037265  316421 cli_runner.go:164] Run: docker volume create kindnet-896447 --label name.minikube.sigs.k8s.io=kindnet-896447 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:14:08.065138  316421 oci.go:103] Successfully created a docker volume kindnet-896447
	I0919 23:14:08.065231  316421 cli_runner.go:164] Run: docker run --rm --name kindnet-896447-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-896447 --entrypoint /usr/bin/test -v kindnet-896447:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:14:08.576800  316421 oci.go:107] Successfully prepared a docker volume kindnet-896447
	I0919 23:14:08.576842  316421 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:14:08.576864  316421 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:14:08.576953  316421 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-896447:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 23:14:11.837198  316421 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-896447:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.260170064s)
	I0919 23:14:11.837237  316421 kic.go:203] duration metric: took 3.260370186s to extract preloaded images to volume ...
	W0919 23:14:11.837327  316421 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:14:11.837360  316421 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:14:11.837394  316421 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:14:11.904498  316421 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-896447 --name kindnet-896447 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-896447 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-896447 --network kindnet-896447 --ip 192.168.76.2 --volume kindnet-896447:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:14:12.240562  316421 cli_runner.go:164] Run: docker container inspect kindnet-896447 --format={{.State.Running}}
	I0919 23:14:12.262399  316421 cli_runner.go:164] Run: docker container inspect kindnet-896447 --format={{.State.Status}}
	I0919 23:14:12.286010  316421 cli_runner.go:164] Run: docker exec kindnet-896447 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:14:12.342350  316421 oci.go:144] the created container "kindnet-896447" has a running status.
	I0919 23:14:12.342386  316421 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/kindnet-896447/id_rsa...
	I0919 23:14:12.270695  314456 cli_runner.go:164] Run: docker network inspect auto-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:14:12.292050  314456 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0919 23:14:12.297258  314456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:14:12.313054  314456 kubeadm.go:875] updating cluster {Name:auto-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:auto-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDo
main:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:14:12.313261  314456 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:14:12.313333  314456 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:14:12.367198  314456 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:14:12.367225  314456 containerd.go:534] Images already preloaded, skipping extraction
	I0919 23:14:12.367330  314456 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:14:12.411226  314456 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:14:12.411257  314456 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:14:12.411268  314456 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0 containerd true true} ...
	I0919 23:14:12.411411  314456 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-896447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:auto-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:14:12.411481  314456 ssh_runner.go:195] Run: sudo crictl info
	I0919 23:14:12.456765  314456 cni.go:84] Creating CNI manager for ""
	I0919 23:14:12.456792  314456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:14:12.456811  314456 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:14:12.456838  314456 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-896447 NodeName:auto-896447 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:14:12.457025  314456 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "auto-896447"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:14:12.457105  314456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:14:12.470628  314456 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:14:12.470705  314456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:14:12.485818  314456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0919 23:14:12.517031  314456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:14:12.549426  314456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
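	The three scp-from-memory lines above write the kubelet drop-in, the kubelet unit, and the kubeadm.yaml shown earlier onto the node. Rendering such a drop-in from a few parameters is straightforward with text/template; a hedged sketch follows, where the struct fields and template text are illustrative stand-ins rather than minikube's actual assets.

```go
package main

import (
	"os"
	"text/template"
)

// Sketch of rendering the kubelet drop-in shown earlier from a few
// parameters; field names here are assumptions, not minikube's types.
const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.0", "auto-896447", "192.168.85.2"}

	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
```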
	I0919 23:14:12.581974  314456 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:14:12.586188  314456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:14:12.607875  314456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:14:12.716116  314456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:14:12.734308  314456 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447 for IP: 192.168.85.2
	I0919 23:14:12.734330  314456 certs.go:194] generating shared ca certs ...
	I0919 23:14:12.734349  314456 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:12.734528  314456 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 23:14:12.734596  314456 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 23:14:12.734608  314456 certs.go:256] generating profile certs ...
	I0919 23:14:12.734682  314456 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/client.key
	I0919 23:14:12.734697  314456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/client.crt with IP's: []
	I0919 23:14:11.958113  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:14:11.958169  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:14:11.958180  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:14:11.958189  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:14:11.958195  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:14:11.958203  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:14:11.958212  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:14:11.958218  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:14:11.958226  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:14:11.958246  294587 retry.go:31] will retry after 7.983039916s: missing components: kube-dns
	I0919 23:14:12.691453  316407 ssh_runner.go:195] Run: systemctl --version
	I0919 23:14:12.696767  316407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:14:12.702814  316407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:14:12.725532  316407 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:14:12.725633  316407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:14:12.740053  316407 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
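	The find/sed command above patches every /etc/cni/net.d/*loopback.conf* so the loopback entry carries an explicit "name" field and a cniVersion of 1.0.0. A hedged sketch of the same edit applied to a single, hypothetical file:

	    # Hypothetical file name; the test patches every *loopback.conf* it finds
	    F=/etc/cni/net.d/200-loopback.conf
	    # Insert a "name" field if the file does not have one yet
	    grep -q '"name"' "$F" || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$F"
	    # Pin the CNI spec version the runtime expects
	    sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$F"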
	I0919 23:14:12.740094  316407 start.go:495] detecting cgroup driver to use...
	I0919 23:14:12.740142  316407 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:14:12.740209  316407 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 23:14:12.760300  316407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:14:12.777831  316407 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:14:12.777900  316407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:14:12.797565  316407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:14:12.811907  316407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:14:12.886144  316407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:14:12.970365  316407 docker.go:234] disabling docker service ...
	I0919 23:14:12.970436  316407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:14:12.986464  316407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:14:13.001386  316407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:14:13.093150  316407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:14:13.174857  316407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:14:13.188837  316407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:14:13.209840  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:14:13.222137  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:14:13.234133  316407 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:14:13.234215  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:14:13.246797  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:14:13.258358  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:14:13.270014  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:14:13.281450  316407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:14:13.293142  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:14:13.304862  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:14:13.316724  316407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:14:13.330869  316407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:14:13.341489  316407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:14:13.351421  316407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:14:13.422220  316407 ssh_runner.go:195] Run: sudo systemctl restart containerd
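	The sed edits above are what switch containerd to the systemd cgroup driver and pin the pause image before the restart. Condensed into a sketch (assumes a stock /etc/containerd/config.toml like the kicbase image ships):

	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
	    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	    sudo systemctl daemon-reload && sudo systemctl restart containerd
	    # the test then waits up to 60s for the CRI socket to reappear (see the next lines)
	    timeout 60 bash -c 'until [ -S /run/containerd/containerd.sock ]; do sleep 1; done'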
	I0919 23:14:13.547300  316407 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 23:14:13.547384  316407 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 23:14:13.552420  316407 start.go:563] Will wait 60s for crictl version
	I0919 23:14:13.552487  316407 ssh_runner.go:195] Run: which crictl
	I0919 23:14:13.556401  316407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:14:13.599948  316407 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 23:14:13.600013  316407 ssh_runner.go:195] Run: containerd --version
	I0919 23:14:13.628047  316407 ssh_runner.go:195] Run: containerd --version
	I0919 23:14:13.663378  316407 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 23:14:13.664991  316407 cli_runner.go:164] Run: docker network inspect newest-cni-312465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:14:13.686205  316407 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0919 23:14:13.690770  316407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:14:13.706243  316407 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0919 23:14:13.227771  314456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/client.crt ...
	I0919 23:14:13.227800  314456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/client.crt: {Name:mk0b93185f911e1ed22da3a7e83b7e4a3b4656c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:13.228000  314456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/client.key ...
	I0919 23:14:13.228016  314456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/client.key: {Name:mk5d8a10d021e65e0ea2306996d9fbc7526dffd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:13.228126  314456 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.key.32a1d63c
	I0919 23:14:13.228143  314456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.crt.32a1d63c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0919 23:14:13.333956  314456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.crt.32a1d63c ...
	I0919 23:14:13.333977  314456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.crt.32a1d63c: {Name:mk9a873519aee347cbf22b74bd2d38bc94810c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:13.334191  314456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.key.32a1d63c ...
	I0919 23:14:13.334218  314456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.key.32a1d63c: {Name:mk13d46ff878edeb75b4823e34041b570050680a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:13.334332  314456 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.crt.32a1d63c -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.crt
	I0919 23:14:13.334456  314456 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.key.32a1d63c -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.key
	I0919 23:14:13.334547  314456 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.key
	I0919 23:14:13.334570  314456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.crt with IP's: []
	I0919 23:14:13.453769  314456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.crt ...
	I0919 23:14:13.453805  314456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.crt: {Name:mk18c10847de2c71aad6fd8c8f7c1ebac841e89d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:13.453991  314456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.key ...
	I0919 23:14:13.454007  314456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.key: {Name:mk9b7f5e8c4bb8366e641a0e2b1f8e73849fb5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:13.454279  314456 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:14:13.454317  314456 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:14:13.454323  314456 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:14:13.454353  314456 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:14:13.454394  314456 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:14:13.454423  314456 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:14:13.454477  314456 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:14:13.455082  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:14:13.484535  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:14:13.517304  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:14:13.548219  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:14:13.577583  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0919 23:14:13.610100  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:14:13.641092  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:14:13.672332  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/auto-896447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:14:13.702813  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:14:13.737270  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:14:13.767056  314456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:14:13.799273  314456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:14:13.821884  314456 ssh_runner.go:195] Run: openssl version
	I0919 23:14:13.829600  314456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:14:13.842064  314456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:14:13.846954  314456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:14:13.847023  314456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:14:13.854977  314456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:14:13.865906  314456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:14:13.877609  314456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:13.881873  314456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:13.881937  314456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:13.889326  314456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:14:13.901422  314456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:14:13.912006  314456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:14:13.916442  314456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:14:13.916530  314456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:14:13.923959  314456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
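	The openssl/ln pairs above install each CA under its OpenSSL subject hash so certificate lookup in /etc/ssl/certs works (b5213941.0, 3ec20f2e.0 and 51391683.0 are those hashes). The same idiom for one certificate, with a hypothetical file name:

	    CERT=/usr/share/ca-certificates/example-ca.pem    # hypothetical path
	    HASH=$(openssl x509 -hash -noout -in "$CERT")     # e.g. b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # OpenSSL resolves CAs by <hash>.0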
	I0919 23:14:13.935586  314456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:14:13.940241  314456 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:14:13.940319  314456 kubeadm.go:392] StartCluster: {Name:auto-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:auto-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:14:13.940396  314456 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:14:13.940469  314456 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:14:13.985350  314456 cri.go:89] found id: ""
	I0919 23:14:13.985427  314456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:14:13.995885  314456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:14:14.007969  314456 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:14:14.008036  314456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:14:14.019642  314456 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:14:14.019663  314456 kubeadm.go:157] found existing configuration files:
	
	I0919 23:14:14.019710  314456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:14:14.030726  314456 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:14:14.030782  314456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:14:14.041637  314456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:14:14.053695  314456 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:14:14.053764  314456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:14:14.064445  314456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:14:14.075173  314456 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:14:14.075236  314456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:14:14.087504  314456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:14:14.099013  314456 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:14:14.099077  314456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
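	Each grep/rm pair above checks whether a kubeconfig-style file already points at https://control-plane.minikube.internal:8443 and, when the file is missing or stale, deletes it so kubeadm can regenerate it. A compact sketch of that check-and-clean pattern:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" 2>/dev/null; then
	        sudo rm -f "/etc/kubernetes/$f"   # absent or pointing elsewhere: let kubeadm recreate it
	      fi
	    done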
	I0919 23:14:14.112720  314456 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:14:14.169062  314456 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:14:14.169138  314456 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:14:14.189713  314456 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:14:14.189803  314456 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:14:14.189894  314456 kubeadm.go:310] OS: Linux
	I0919 23:14:14.189972  314456 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:14:14.190087  314456 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:14:14.190193  314456 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:14:14.190265  314456 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:14:14.190344  314456 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:14:14.190423  314456 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:14:14.190492  314456 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:14:14.190547  314456 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:14:14.271081  314456 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:14:14.271250  314456 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:14:14.271370  314456 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:14:14.277841  314456 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:14:13.707636  316407 kubeadm.go:875] updating cluster {Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:14:13.707821  316407 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:14:13.707910  316407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:14:13.746098  316407 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:14:13.746124  316407 containerd.go:534] Images already preloaded, skipping extraction
	I0919 23:14:13.746189  316407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:14:13.783844  316407 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:14:13.783878  316407 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:14:13.783892  316407 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 containerd true true} ...
	I0919 23:14:13.784029  316407 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-312465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
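	The unit override above becomes the 10-kubeadm.conf drop-in copied a few lines below (321 bytes) and activated with a daemon-reload plus a kubelet start. A hedged reproduction that writes the same content with printf/tee instead of minikube's in-memory scp:

	    sudo mkdir -p /etc/systemd/system/kubelet.service.d
	    printf '%s\n' '[Unit]' 'Wants=containerd.service' '' '[Service]' 'ExecStart=' \
	      'ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-312465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2' \
	      '' '[Install]' | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
	    sudo systemctl daemon-reload && sudo systemctl start kubelet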
	I0919 23:14:13.784105  316407 ssh_runner.go:195] Run: sudo crictl info
	I0919 23:14:13.825633  316407 cni.go:84] Creating CNI manager for ""
	I0919 23:14:13.825659  316407 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:14:13.825671  316407 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0919 23:14:13.825695  316407 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-312465 NodeName:newest-cni-312465 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:14:13.825851  316407 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-312465"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
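	For newest-cni-312465 the kubeadm extra option pod-network-cidr=10.42.0.0/16 flows into both podSubnet and the kube-proxy clusterCIDR in the dump above. A quick, hedged way to confirm the rendered file kept the two in sync (file path taken from the scp line below):

	    sudo grep -nE 'podSubnet|clusterCIDR' /var/tmp/minikube/kubeadm.yaml.new
	    # expected, per the dump above:
	    #   podSubnet: "10.42.0.0/16"
	    #   clusterCIDR: "10.42.0.0/16"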
	
	I0919 23:14:13.825918  316407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:14:13.837794  316407 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:14:13.837887  316407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:14:13.849348  316407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0919 23:14:13.870115  316407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:14:13.890641  316407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I0919 23:14:13.911598  316407 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:14:13.915811  316407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:14:13.929529  316407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:14:14.000787  316407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:14:14.028231  316407 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465 for IP: 192.168.94.2
	I0919 23:14:14.028254  316407 certs.go:194] generating shared ca certs ...
	I0919 23:14:14.028275  316407 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:14.028432  316407 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 23:14:14.028491  316407 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 23:14:14.028507  316407 certs.go:256] generating profile certs ...
	I0919 23:14:14.028614  316407 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/client.key
	I0919 23:14:14.028693  316407 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key.41c88afb
	I0919 23:14:14.028734  316407 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key
	I0919 23:14:14.028833  316407 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:14:14.028868  316407 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:14:14.028877  316407 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:14:14.028899  316407 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:14:14.028920  316407 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:14:14.028944  316407 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:14:14.028982  316407 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:14:14.029670  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:14:14.060556  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:14:14.091021  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:14:14.125694  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:14:14.162025  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 23:14:14.193400  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:14:14.225636  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:14:14.256444  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/newest-cni-312465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:14:14.289567  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:14:14.318029  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:14:14.346226  316407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:14:14.376307  316407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:14:14.396487  316407 ssh_runner.go:195] Run: openssl version
	I0919 23:14:14.402520  316407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:14:14.415243  316407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:14.419577  316407 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:14.419641  316407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:14.427202  316407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:14:14.437468  316407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:14:14.448249  316407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:14:14.452004  316407 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:14:14.452063  316407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:14:14.459481  316407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 23:14:14.470309  316407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:14:14.481537  316407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:14:14.485192  316407 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:14:14.485248  316407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:14:14.492363  316407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:14:14.502027  316407 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:14:14.505956  316407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 23:14:14.512880  316407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 23:14:14.522652  316407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 23:14:14.530950  316407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 23:14:14.538030  316407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 23:14:14.545349  316407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
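	The -checkend 86400 calls above ask OpenSSL whether each existing control-plane certificate will still be valid 24 hours (86400 seconds) from now; a zero exit status means it will. The same check for a single certificate, as a sketch:

	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "still valid in 24h" \
	      || echo "expires within 24h"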
	I0919 23:14:14.552550  316407 kubeadm.go:392] StartCluster: {Name:newest-cni-312465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-312465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:14:14.552666  316407 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:14:14.552715  316407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:14:14.605168  316407 cri.go:89] found id: "3bb98db115b6ad13cceece8b521436100bb04d0ceb273c75d323e94ef7440804"
	I0919 23:14:14.605195  316407 cri.go:89] found id: "03fb64d8cae80bb3a6cbd4e75fb9b8bed32c133d882bac12b3e69b1d615553f9"
	I0919 23:14:14.605201  316407 cri.go:89] found id: "aa4f1d7ae4be8607dc91cdece6dc505e811e83bc72a4d7ac0cf5dbb0e3120d87"
	I0919 23:14:14.605206  316407 cri.go:89] found id: "901a24762656849ac73b160ebe4d6031cc41bae30508e7e9b204baf440837dc2"
	I0919 23:14:14.605210  316407 cri.go:89] found id: "02f3965879829c98ed424d224c8a4ecc467b95a2b385c7eb4440639f1bccf628"
	I0919 23:14:14.605214  316407 cri.go:89] found id: "529122f97b267c7d2c20849ccbcc739630ced21969d0da2315cc2bb32dc0c09e"
	I0919 23:14:14.605218  316407 cri.go:89] found id: "879c323689e20cb30fefa0341fc12a9b42debf5a0380f2c22c16c23aefb17b5e"
	I0919 23:14:14.605222  316407 cri.go:89] found id: ""
	I0919 23:14:14.605282  316407 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0919 23:14:14.626731  316407 cri.go:116] JSON = null
	W0919 23:14:14.626776  316407 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 7
	I0919 23:14:14.626824  316407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:14:14.643440  316407 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 23:14:14.643461  316407 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 23:14:14.643521  316407 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 23:14:14.659895  316407 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 23:14:14.660666  316407 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-312465" does not appear in /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:14:14.661078  316407 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14678/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-312465" cluster setting kubeconfig missing "newest-cni-312465" context setting]
	I0919 23:14:14.661838  316407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:14.663926  316407 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 23:14:14.680534  316407 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0919 23:14:14.680570  316407 kubeadm.go:593] duration metric: took 37.103648ms to restartPrimaryControlPlane
	I0919 23:14:14.680582  316407 kubeadm.go:394] duration metric: took 128.045835ms to StartCluster
	I0919 23:14:14.680601  316407 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:14.680657  316407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:14:14.681787  316407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:14.682098  316407 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:14:14.682443  316407 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:14:14.682542  316407 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-312465"
	I0919 23:14:14.682566  316407 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-312465"
	W0919 23:14:14.682578  316407 addons.go:247] addon storage-provisioner should already be in state true
	I0919 23:14:14.682605  316407 host.go:66] Checking if "newest-cni-312465" exists ...
	I0919 23:14:14.682653  316407 addons.go:69] Setting default-storageclass=true in profile "newest-cni-312465"
	I0919 23:14:14.682676  316407 addons.go:69] Setting dashboard=true in profile "newest-cni-312465"
	I0919 23:14:14.682692  316407 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-312465"
	I0919 23:14:14.682696  316407 addons.go:238] Setting addon dashboard=true in "newest-cni-312465"
	I0919 23:14:14.682693  316407 addons.go:69] Setting metrics-server=true in profile "newest-cni-312465"
	W0919 23:14:14.682706  316407 addons.go:247] addon dashboard should already be in state true
	I0919 23:14:14.682722  316407 addons.go:238] Setting addon metrics-server=true in "newest-cni-312465"
	W0919 23:14:14.682733  316407 addons.go:247] addon metrics-server should already be in state true
	I0919 23:14:14.682738  316407 host.go:66] Checking if "newest-cni-312465" exists ...
	I0919 23:14:14.682766  316407 host.go:66] Checking if "newest-cni-312465" exists ...
	I0919 23:14:14.683072  316407 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:14:14.683131  316407 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:14:14.683171  316407 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:14:14.683300  316407 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:14:14.683717  316407 config.go:182] Loaded profile config "newest-cni-312465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:14.689018  316407 out.go:179] * Verifying Kubernetes components...
	I0919 23:14:14.690673  316407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:14:14.727141  316407 addons.go:238] Setting addon default-storageclass=true in "newest-cni-312465"
	W0919 23:14:14.727330  316407 addons.go:247] addon default-storageclass should already be in state true
	I0919 23:14:14.727408  316407 host.go:66] Checking if "newest-cni-312465" exists ...
	I0919 23:14:14.728034  316407 cli_runner.go:164] Run: docker container inspect newest-cni-312465 --format={{.State.Status}}
	I0919 23:14:14.729775  316407 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:14:14.732485  316407 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 23:14:14.732514  316407 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:14:14.732751  316407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:14:14.732604  316407 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0919 23:14:14.733040  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:14.734716  316407 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 23:14:14.734882  316407 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 23:14:14.735006  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:14.736607  316407 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0919 23:14:12.833967  316421 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/kindnet-896447/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:14:12.867976  316421 cli_runner.go:164] Run: docker container inspect kindnet-896447 --format={{.State.Status}}
	I0919 23:14:12.888929  316421 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:14:12.888954  316421 kic_runner.go:114] Args: [docker exec --privileged kindnet-896447 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:14:12.943921  316421 cli_runner.go:164] Run: docker container inspect kindnet-896447 --format={{.State.Status}}
	I0919 23:14:12.967055  316421 machine.go:93] provisionDockerMachine start ...
	I0919 23:14:12.967150  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:12.987883  316421 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:12.988244  316421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I0919 23:14:12.988261  316421 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:14:13.131477  316421 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-896447
	
	I0919 23:14:13.131514  316421 ubuntu.go:182] provisioning hostname "kindnet-896447"
	I0919 23:14:13.131585  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:13.155953  316421 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:13.156192  316421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I0919 23:14:13.156208  316421 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-896447 && echo "kindnet-896447" | sudo tee /etc/hostname
	I0919 23:14:13.311553  316421 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-896447
	
	I0919 23:14:13.311659  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:13.333633  316421 main.go:141] libmachine: Using SSH client type: native
	I0919 23:14:13.333942  316421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I0919 23:14:13.333971  316421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-896447' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-896447/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-896447' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:14:13.477191  316421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
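The hostname provisioning above is just a handful of commands executed over the container's forwarded SSH port (127.0.0.1:33114 for kindnet-896447). A minimal sketch of running one such command with golang.org/x/crypto/ssh; the key path and address are placeholders taken from this log, not minikube's actual libmachine code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Illustrative key path; the log uses the per-machine id_rsa under .minikube/machines.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/kindnet-896447/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33114", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same command the provisioner runs above to set the hostname.
	out, err := session.CombinedOutput(`sudo hostname kindnet-896447 && echo "kindnet-896447" | sudo tee /etc/hostname`)
	fmt.Printf("output: %s err: %v\n", out, err)
}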
	I0919 23:14:13.477226  316421 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 23:14:13.477247  316421 ubuntu.go:190] setting up certificates
	I0919 23:14:13.477259  316421 provision.go:84] configureAuth start
	I0919 23:14:13.477315  316421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-896447
	I0919 23:14:13.499643  316421 provision.go:143] copyHostCerts
	I0919 23:14:13.499712  316421 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 23:14:13.499733  316421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 23:14:13.499803  316421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 23:14:13.499926  316421 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 23:14:13.499939  316421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 23:14:13.499986  316421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 23:14:13.500082  316421 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 23:14:13.500093  316421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 23:14:13.500136  316421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 23:14:13.500229  316421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.kindnet-896447 san=[127.0.0.1 192.168.76.2 kindnet-896447 localhost minikube]
	I0919 23:14:13.774403  316421 provision.go:177] copyRemoteCerts
	I0919 23:14:13.774464  316421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:14:13.774504  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:13.797738  316421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/kindnet-896447/id_rsa Username:docker}
	I0919 23:14:13.898422  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0919 23:14:13.928934  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:14:13.960742  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 23:14:13.990831  316421 provision.go:87] duration metric: took 513.561321ms to configureAuth
	I0919 23:14:13.990856  316421 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:14:13.991020  316421 config.go:182] Loaded profile config "kindnet-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:14:13.991031  316421 machine.go:96] duration metric: took 1.023951266s to provisionDockerMachine
	I0919 23:14:13.991038  316421 client.go:171] duration metric: took 6.179978715s to LocalClient.Create
	I0919 23:14:13.991059  316421 start.go:167] duration metric: took 6.180048472s to libmachine.API.Create "kindnet-896447"
	I0919 23:14:13.991071  316421 start.go:293] postStartSetup for "kindnet-896447" (driver="docker")
	I0919 23:14:13.991082  316421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:14:13.991139  316421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:14:13.991208  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:14.014007  316421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/kindnet-896447/id_rsa Username:docker}
	I0919 23:14:14.125753  316421 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:14:14.131110  316421 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:14:14.131151  316421 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:14:14.131191  316421 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:14:14.131199  316421 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:14:14.131212  316421 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 23:14:14.131282  316421 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 23:14:14.131379  316421 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 23:14:14.131491  316421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:14:14.144534  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:14:14.179321  316421 start.go:296] duration metric: took 188.234585ms for postStartSetup
	I0919 23:14:14.179794  316421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-896447
	I0919 23:14:14.203994  316421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/config.json ...
	I0919 23:14:14.204542  316421 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:14:14.204751  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:14.226585  316421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/kindnet-896447/id_rsa Username:docker}
	I0919 23:14:14.327556  316421 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:14:14.332879  316421 start.go:128] duration metric: took 6.527836552s to createHost
	I0919 23:14:14.332905  316421 start.go:83] releasing machines lock for "kindnet-896447", held for 6.528006955s
	I0919 23:14:14.332977  316421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-896447
	I0919 23:14:14.353625  316421 ssh_runner.go:195] Run: cat /version.json
	I0919 23:14:14.353683  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:14.353762  316421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:14:14.353842  316421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-896447
	I0919 23:14:14.376549  316421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/kindnet-896447/id_rsa Username:docker}
	I0919 23:14:14.376871  316421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/kindnet-896447/id_rsa Username:docker}
	I0919 23:14:14.472738  316421 ssh_runner.go:195] Run: systemctl --version
	I0919 23:14:14.558540  316421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:14:14.564342  316421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:14:14.611592  316421 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:14:14.611694  316421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:14:14.660148  316421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 23:14:14.660209  316421 start.go:495] detecting cgroup driver to use...
	I0919 23:14:14.660246  316421 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:14:14.660303  316421 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 23:14:14.683996  316421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:14:14.702943  316421 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:14:14.703000  316421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:14:14.738567  316421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:14:14.783709  316421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:14:14.907036  316421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:14:15.043923  316421 docker.go:234] disabling docker service ...
	I0919 23:14:15.044142  316421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:14:15.084474  316421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:14:15.109803  316421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:14:15.224009  316421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:14:15.323364  316421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:14:15.344143  316421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:14:15.369854  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:14:15.386980  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:14:15.400363  316421 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:14:15.400507  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:14:15.416777  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:14:15.429203  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:14:15.441951  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:14:15.454030  316421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:14:15.466471  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:14:15.481055  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:14:15.496817  316421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:14:15.511775  316421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:14:15.525364  316421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:14:15.538710  316421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:14:15.646137  316421 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 23:14:15.781720  316421 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 23:14:15.781806  316421 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 23:14:15.787063  316421 start.go:563] Will wait 60s for crictl version
	I0919 23:14:15.787125  316421 ssh_runner.go:195] Run: which crictl
	I0919 23:14:15.792265  316421 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:14:15.853186  316421 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 23:14:15.853341  316421 ssh_runner.go:195] Run: containerd --version
	I0919 23:14:15.887640  316421 ssh_runner.go:195] Run: containerd --version
	I0919 23:14:15.920891  316421 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
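The 60s socket wait logged at start.go:542 above amounts to polling for /run/containerd/containerd.sock after the systemctl restart. A minimal sketch of that kind of wait, assuming a local stat rather than the ssh_runner used in the log:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is ready")
}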
	I0919 23:14:14.738308  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0919 23:14:14.738329  316407 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0919 23:14:14.738394  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:14.764755  316407 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:14:14.764780  316407 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:14:14.764841  316407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-312465
	I0919 23:14:14.771303  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:14.777340  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:14.791575  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:14.805000  316407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/newest-cni-312465/id_rsa Username:docker}
	I0919 23:14:14.882242  316407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:14:14.910606  316407 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:14:14.910697  316407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:14:14.930741  316407 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 23:14:14.930767  316407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 23:14:14.935967  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0919 23:14:14.935993  316407 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0919 23:14:14.939640  316407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:14:14.957552  316407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:14:14.994458  316407 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 23:14:14.994489  316407 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 23:14:14.997621  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0919 23:14:14.997742  316407 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0919 23:14:15.034556  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0919 23:14:15.034585  316407 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0919 23:14:15.066255  316407 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:14:15.066302  316407 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 23:14:15.109246  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0919 23:14:15.109273  316407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0919 23:14:15.125277  316407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:14:15.134149  316407 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:14:15.134213  316407 retry.go:31] will retry after 132.780092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:14:15.134285  316407 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:14:15.134300  316407 retry.go:31] will retry after 191.23981ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:14:15.150244  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0919 23:14:15.150276  316407 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0919 23:14:15.188071  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0919 23:14:15.188099  316407 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0919 23:14:15.226118  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0919 23:14:15.226142  316407 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0919 23:14:15.247203  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0919 23:14:15.247229  316407 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0919 23:14:15.268080  316407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:14:15.280407  316407 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:14:15.280444  316407 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0919 23:14:15.309418  316407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:14:15.326108  316407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
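The "apply failed, will retry" and retry.go lines above show the addon manifests being re-applied while the apiserver is still coming up; the connection-refused errors on localhost:8443 are treated as transient and the apply is simply repeated. A minimal sketch of that retry pattern, with the command, attempt count and delay as placeholders (the real delays in the log are jittered):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs kubectl apply until it succeeds or attempts run out.
func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s failed: %v: %s", manifest, err, out)
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5, 200*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}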
	I0919 23:14:15.411042  316407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:14:15.922475  316421 cli_runner.go:164] Run: docker network inspect kindnet-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:14:15.947468  316421 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0919 23:14:15.952833  316421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:14:15.969789  316421 kubeadm.go:875] updating cluster {Name:kindnet-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:14:15.969918  316421 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:14:15.970002  316421 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:14:16.019109  316421 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:14:16.019137  316421 containerd.go:534] Images already preloaded, skipping extraction
	I0919 23:14:16.019207  316421 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:14:16.064817  316421 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:14:16.064846  316421 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:14:16.064858  316421 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 containerd true true} ...
	I0919 23:14:16.064959  316421 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-896447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:kindnet-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0919 23:14:16.065020  316421 ssh_runner.go:195] Run: sudo crictl info
	I0919 23:14:16.110692  316421 cni.go:84] Creating CNI manager for "kindnet"
	I0919 23:14:16.110722  316421 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:14:16.110751  316421 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-896447 NodeName:kindnet-896447 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:14:16.110896  316421 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kindnet-896447"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:14:16.110970  316421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:14:16.123909  316421 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:14:16.123982  316421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:14:16.135873  316421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0919 23:14:16.164147  316421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:14:16.198671  316421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I0919 23:14:16.226892  316421 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:14:16.232105  316421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:14:16.247558  316421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:14:16.346452  316421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:14:16.372108  316421 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447 for IP: 192.168.76.2
	I0919 23:14:16.372145  316421 certs.go:194] generating shared ca certs ...
	I0919 23:14:16.372197  316421 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:16.372376  316421 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 23:14:16.372433  316421 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 23:14:16.372443  316421 certs.go:256] generating profile certs ...
	I0919 23:14:16.372521  316421 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/client.key
	I0919 23:14:16.372536  316421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/client.crt with IP's: []
	I0919 23:14:16.995330  316421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/client.crt ...
	I0919 23:14:16.995368  316421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/client.crt: {Name:mk756bd659ab6e6d285c45d5259ede088998c137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:16.995567  316421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/client.key ...
	I0919 23:14:16.995584  316421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/client.key: {Name:mk22d198c19d0d72b60c3316938f47df91a6db20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:16.995678  316421 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.key.eefef399
	I0919 23:14:16.995701  316421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.crt.eefef399 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
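The certs.go/crypto.go lines above generate the profile's apiserver certificate with the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]. A minimal, self-contained sketch of producing a SAN-bearing certificate with crypto/x509; it self-signs for brevity, whereas the real flow signs with the shared minikubeCA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SAN IPs copied from the log; validity and subject are placeholders.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	_ = os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
}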
	I0919 23:14:14.279860  314456 out.go:252]   - Generating certificates and keys ...
	I0919 23:14:14.279973  314456 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:14:14.280059  314456 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:14:14.339607  314456 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:14:14.695648  314456 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:14:15.166427  314456 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:14:15.478284  314456 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:14:16.056979  314456 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:14:16.057145  314456 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-896447 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0919 23:14:16.613509  314456 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:14:16.613709  314456 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-896447 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0919 23:14:16.802042  314456 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:14:17.219182  314456 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:14:17.329623  314456 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:14:17.330259  314456 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:14:17.603770  314456 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:14:17.944879  316407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.819556771s)
	I0919 23:14:17.944912  316407 addons.go:479] Verifying addon metrics-server=true in "newest-cni-312465"
	I0919 23:14:17.944967  316407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.67685062s)
	I0919 23:14:17.945121  316407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.635654715s)
	I0919 23:14:17.945218  316407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.619066208s)
	I0919 23:14:17.945263  316407 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.534186003s)
	I0919 23:14:17.945453  316407 api_server.go:72] duration metric: took 3.263316745s to wait for apiserver process to appear ...
	I0919 23:14:17.945465  316407 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:14:17.945486  316407 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:14:17.950311  316407 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-312465 addons enable metrics-server
	
	I0919 23:14:17.950677  316407 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:14:17.950701  316407 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:14:17.960944  316407 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0919 23:14:18.002769  314456 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:14:18.383355  314456 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:14:18.670472  314456 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:14:18.842477  314456 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:14:18.843208  314456 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:14:18.848514  314456 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:14:17.962534  316407 addons.go:514] duration metric: took 3.280097551s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0919 23:14:18.446348  316407 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:14:18.450811  316407 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:14:18.450836  316407 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:14:18.946338  316407 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:14:18.950671  316407 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:14:18.950695  316407 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:14:19.446401  316407 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:14:19.451960  316407 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0919 23:14:19.453481  316407 api_server.go:141] control plane version: v1.34.0
	I0919 23:14:19.453511  316407 api_server.go:131] duration metric: took 1.508037102s to wait for apiserver health ...
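The healthz wait above keeps polling https://192.168.94.2:8443/healthz, treating the 500 responses (failed poststarthooks) as "not ready yet" until a 200 comes back. A minimal sketch of that loop; the test's real client authenticates with the cluster certificates, so the InsecureSkipVerify below is only to keep the sketch self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}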
	I0919 23:14:19.453520  316407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:14:19.458758  316407 system_pods.go:59] 9 kube-system pods found
	I0919 23:14:19.458795  316407 system_pods.go:61] "coredns-66bc5c9577-xsnhs" [7a077a85-1f7c-4378-848b-a221d6e520ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:14:19.458803  316407 system_pods.go:61] "etcd-newest-cni-312465" [08794421-938c-46b5-bdf7-c0231507b4c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:14:19.458809  316407 system_pods.go:61] "kindnet-k9944" [ee352ec9-4e85-4bd1-9933-d4bf06151211] Running
	I0919 23:14:19.458816  316407 system_pods.go:61] "kube-apiserver-newest-cni-312465" [d2270076-85fd-4d05-8cc7-540ba3e8e250] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:14:19.458824  316407 system_pods.go:61] "kube-controller-manager-newest-cni-312465" [6e0dfe68-d986-4517-b42d-b3c9399cb136] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:14:19.458848  316407 system_pods.go:61] "kube-proxy-xmkv2" [9950d4ad-cc22-4962-88ac-47beba90840d] Running
	I0919 23:14:19.458856  316407 system_pods.go:61] "kube-scheduler-newest-cni-312465" [1b509b47-9469-4600-82ee-6e262fd24fef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:14:19.458872  316407 system_pods.go:61] "metrics-server-746fcd58dc-sbqxp" [924329ef-c721-4984-b923-8e92b0a66cd5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:14:19.458877  316407 system_pods.go:61] "storage-provisioner" [eff56a05-7e3a-4af0-9c37-7e4b4a5b6334] Running
	I0919 23:14:19.458890  316407 system_pods.go:74] duration metric: took 5.365089ms to wait for pod list to return data ...
	I0919 23:14:19.458900  316407 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:14:19.461999  316407 default_sa.go:45] found service account: "default"
	I0919 23:14:19.462030  316407 default_sa.go:55] duration metric: took 3.124934ms for default service account to be created ...
	I0919 23:14:19.462043  316407 kubeadm.go:578] duration metric: took 4.77991527s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0919 23:14:19.462066  316407 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:14:19.466335  316407 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 23:14:19.466385  316407 node_conditions.go:123] node cpu capacity is 8
	I0919 23:14:19.466402  316407 node_conditions.go:105] duration metric: took 4.330691ms to run NodePressure ...
	I0919 23:14:19.466416  316407 start.go:241] waiting for startup goroutines ...
	I0919 23:14:19.466433  316407 start.go:246] waiting for cluster config update ...
	I0919 23:14:19.466446  316407 start.go:255] writing updated cluster config ...
	I0919 23:14:19.466785  316407 ssh_runner.go:195] Run: rm -f paused
	I0919 23:14:19.520095  316407 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:14:19.522821  316407 out.go:179] * Done! kubectl is now configured to use "newest-cni-312465" cluster and "default" namespace by default
	I0919 23:14:17.916447  316421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.crt.eefef399 ...
	I0919 23:14:17.916502  316421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.crt.eefef399: {Name:mk72e58bfc8302aa7f218e1e79f36883826a855f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:17.916694  316421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.key.eefef399 ...
	I0919 23:14:17.916716  316421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.key.eefef399: {Name:mk3be3aa1c5fa005b492ed8c1826542cf3a64813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:17.916836  316421 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.crt.eefef399 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.crt
	I0919 23:14:17.916971  316421 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.key.eefef399 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.key
	I0919 23:14:17.917066  316421 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.key
	I0919 23:14:17.917093  316421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.crt with IP's: []
	I0919 23:14:18.315401  316421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.crt ...
	I0919 23:14:18.315433  316421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.crt: {Name:mka98fb2ee9fd556c68bcdb49fcf9c592f6611d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:18.315624  316421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.key ...
	I0919 23:14:18.315646  316421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.key: {Name:mkaecc9232a2f403f0979c61521a806d03de4a51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:14:18.315886  316421 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:14:18.315932  316421 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:14:18.315948  316421 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:14:18.315978  316421 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:14:18.316015  316421 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:14:18.316050  316421 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:14:18.316102  316421 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:14:18.316917  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:14:18.346101  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:14:18.375053  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:14:18.403450  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:14:18.431536  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 23:14:18.461372  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 23:14:18.491092  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:14:18.521880  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kindnet-896447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:14:18.549370  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:14:18.584079  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:14:18.614787  316421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:14:18.642628  316421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:14:18.663602  316421 ssh_runner.go:195] Run: openssl version
	I0919 23:14:18.669498  316421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:14:18.680793  316421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:14:18.685044  316421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:14:18.685124  316421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:14:18.693071  316421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:14:18.704026  316421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:14:18.714870  316421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:18.718981  316421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:18.719040  316421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:14:18.726457  316421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:14:18.737578  316421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:14:18.749196  316421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:14:18.753603  316421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:14:18.753693  316421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:14:18.763035  316421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
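	The three ssh_runner sequences above (for 182102.pem, minikubeCA.pem and 18210.pem) all record the same trust-store pattern: copy the PEM under /usr/share/ca-certificates, link it by name into /etc/ssl/certs, compute its OpenSSL subject hash, and add the <hash>.0 symlink that OpenSSL actually resolves. A minimal editor's sketch of that sequence for one of them, taken directly from the commands logged above (the hash 51391683 is quoted from the log, not recomputed here):
	
	    # Sketch of the trust-store update recorded above (run on the node; assumes the PEM is already copied).
	    CERT=/usr/share/ca-certificates/18210.pem
	    sudo ln -fs "$CERT" /etc/ssl/certs/18210.pem                     # link by name, as in the log
	    HASH=$(openssl x509 -hash -noout -in "$CERT")                    # prints e.g. 51391683 per the log
	    sudo ln -fs /etc/ssl/certs/18210.pem "/etc/ssl/certs/${HASH}.0"  # hash-named link OpenSSL looks up
	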
	I0919 23:14:18.774430  316421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:14:18.778601  316421 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:14:18.778671  316421 kubeadm.go:392] StartCluster: {Name:kindnet-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:14:18.778773  316421 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:14:18.778834  316421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:14:18.819443  316421 cri.go:89] found id: ""
	I0919 23:14:18.819505  316421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:14:18.829703  316421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:14:18.841921  316421 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:14:18.841987  316421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:14:18.853980  316421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:14:18.854000  316421 kubeadm.go:157] found existing configuration files:
	
	I0919 23:14:18.854058  316421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:14:18.866028  316421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:14:18.866093  316421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:14:18.876375  316421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:14:18.886428  316421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:14:18.886493  316421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:14:18.897483  316421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:14:18.909787  316421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:14:18.909855  316421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:14:18.919563  316421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:14:18.929692  316421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:14:18.929751  316421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:14:18.939921  316421 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:14:19.006215  316421 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:14:19.065580  316421 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:14:18.853869  314456 out.go:252]   - Booting up control plane ...
	I0919 23:14:18.854010  314456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:14:18.854104  314456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:14:18.854221  314456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:14:18.868490  314456 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:14:18.868676  314456 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:14:18.875634  314456 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:14:18.876039  314456 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:14:18.876105  314456 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:14:18.956643  314456 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:14:18.956872  314456 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:14:19.459695  314456 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.757938ms
	I0919 23:14:19.463674  314456 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:14:19.463889  314456 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0919 23:14:19.464288  314456 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:14:19.464400  314456 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:14:21.546363  314456 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.083006654s
	I0919 23:14:22.533017  314456 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.069813223s
	I0919 23:14:19.946192  294587 system_pods.go:86] 8 kube-system pods found
	I0919 23:14:19.946232  294587 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:14:19.946241  294587 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running
	I0919 23:14:19.946254  294587 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running
	I0919 23:14:19.946260  294587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running
	I0919 23:14:19.946265  294587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running
	I0919 23:14:19.946272  294587 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:14:19.946278  294587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running
	I0919 23:14:19.946283  294587 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running
	I0919 23:14:19.946301  294587 retry.go:31] will retry after 13.760304207s: missing components: kube-dns
	I0919 23:14:24.466359  314456 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.003071972s
	I0919 23:14:24.482083  314456 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:14:24.497329  314456 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:14:24.510940  314456 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:14:24.511255  314456 kubeadm.go:310] [mark-control-plane] Marking the node auto-896447 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:14:24.522441  314456 kubeadm.go:310] [bootstrap-token] Using token: mvn47u.kvywt9aqphew3u0u
	I0919 23:14:24.524126  314456 out.go:252]   - Configuring RBAC rules ...
	I0919 23:14:24.524338  314456 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:14:24.529959  314456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:14:24.539314  314456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:14:24.544774  314456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:14:24.549632  314456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:14:24.553193  314456 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:14:24.874613  314456 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:14:25.298841  314456 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:14:25.875962  314456 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:14:25.879128  314456 kubeadm.go:310] 
	I0919 23:14:25.879289  314456 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:14:25.879297  314456 kubeadm.go:310] 
	I0919 23:14:25.879447  314456 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:14:25.879469  314456 kubeadm.go:310] 
	I0919 23:14:25.879530  314456 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:14:25.879631  314456 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:14:25.879832  314456 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:14:25.879854  314456 kubeadm.go:310] 
	I0919 23:14:25.879980  314456 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:14:25.880005  314456 kubeadm.go:310] 
	I0919 23:14:25.880095  314456 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:14:25.880103  314456 kubeadm.go:310] 
	I0919 23:14:25.880206  314456 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:14:25.880289  314456 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:14:25.880364  314456 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:14:25.880369  314456 kubeadm.go:310] 
	I0919 23:14:25.880771  314456 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:14:25.880907  314456 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:14:25.880915  314456 kubeadm.go:310] 
	I0919 23:14:25.881012  314456 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mvn47u.kvywt9aqphew3u0u \
	I0919 23:14:25.881141  314456 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 \
	I0919 23:14:25.881181  314456 kubeadm.go:310] 	--control-plane 
	I0919 23:14:25.881187  314456 kubeadm.go:310] 
	I0919 23:14:25.881306  314456 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:14:25.881313  314456 kubeadm.go:310] 
	I0919 23:14:25.881421  314456 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mvn47u.kvywt9aqphew3u0u \
	I0919 23:14:25.881648  314456 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 
	I0919 23:14:25.886959  314456 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:14:25.887193  314456 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:14:25.887316  314456 cni.go:84] Creating CNI manager for ""
	I0919 23:14:25.888322  314456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:14:25.893657  314456 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ff15da67a3dd2       409467f978b4a       9 seconds ago       Running             kindnet-cni               1                   aae1d5a52934e       kindnet-k9944
	84fd9d8b5e329       6e38f40d628db       10 seconds ago      Running             storage-provisioner       1                   7f471c7ae7612       storage-provisioner
	876a049bd2c09       df0860106674d       10 seconds ago      Running             kube-proxy                1                   5b4da5b4ca0dd       kube-proxy-xmkv2
	5d374a5f9fa61       46169d968e920       13 seconds ago      Running             kube-scheduler            1                   41f093272882b       kube-scheduler-newest-cni-312465
	330d509e7f38b       a0af72f2ec6d6       13 seconds ago      Running             kube-controller-manager   1                   88becf992f287       kube-controller-manager-newest-cni-312465
	40de363bc7b2f       90550c43ad2bc       13 seconds ago      Running             kube-apiserver            1                   f3cd2f7aac9a4       kube-apiserver-newest-cni-312465
	d166726518342       5f1f5298c888d       13 seconds ago      Running             etcd                      1                   e9df11abe0829       etcd-newest-cni-312465
	3bb98db115b6a       6e38f40d628db       24 seconds ago      Exited              storage-provisioner       0                   38b277a881dd9       storage-provisioner
	03fb64d8cae80       409467f978b4a       24 seconds ago      Exited              kindnet-cni               0                   3409b95e609ad       kindnet-k9944
	aa4f1d7ae4be8       df0860106674d       24 seconds ago      Exited              kube-proxy                0                   e562e80fea19d       kube-proxy-xmkv2
	901a247626568       5f1f5298c888d       36 seconds ago      Exited              etcd                      0                   edb938e8e5c29       etcd-newest-cni-312465
	02f3965879829       46169d968e920       36 seconds ago      Exited              kube-scheduler            0                   ff450beb2bbf3       kube-scheduler-newest-cni-312465
	529122f97b267       a0af72f2ec6d6       36 seconds ago      Exited              kube-controller-manager   0                   4d43b2e1857a3       kube-controller-manager-newest-cni-312465
	879c323689e20       90550c43ad2bc       36 seconds ago      Exited              kube-apiserver            0                   4ad178b48b93e       kube-apiserver-newest-cni-312465
	
	
	==> containerd <==
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.488977525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-k9944,Uid:ee352ec9-4e85-4bd1-9933-d4bf06151211,Namespace:kube-system,Attempt:1,}"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.497182817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xsnhs,Uid:7a077a85-1f7c-4378-848b-a221d6e520ff,Namespace:kube-system,Attempt:0,}"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.503624877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-746fcd58dc-sbqxp,Uid:924329ef-c721-4984-b923-8e92b0a66cd5,Namespace:kube-system,Attempt:0,}"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.507297227Z" level=info msg="StopPodSandbox for \"38b277a881dd91bca1d021ab6c910c3367f41b17aa5c7ea43f70265bf4c6f012\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.507359510Z" level=info msg="Container to stop \"3bb98db115b6ad13cceece8b521436100bb04d0ceb273c75d323e94ef7440804\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.507471325Z" level=info msg="TearDown network for sandbox \"38b277a881dd91bca1d021ab6c910c3367f41b17aa5c7ea43f70265bf4c6f012\" successfully"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.507485794Z" level=info msg="StopPodSandbox for \"38b277a881dd91bca1d021ab6c910c3367f41b17aa5c7ea43f70265bf4c6f012\" returns successfully"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.516132935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:eff56a05-7e3a-4af0-9c37-7e4b4a5b6334,Namespace:kube-system,Attempt:1,}"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.595659163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-746fcd58dc-sbqxp,Uid:924329ef-c721-4984-b923-8e92b0a66cd5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a0340f86a5d931cfd7b3b8e068c131613cdb373e49caa5aa71fdf57df7cc627\": failed to find network info for sandbox \"2a0340f86a5d931cfd7b3b8e068c131613cdb373e49caa5aa71fdf57df7cc627\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.596895856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xsnhs,Uid:7a077a85-1f7c-4378-848b-a221d6e520ff,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5935d5f1bcd9300bc8353efd02954304aafc6be6d0901cc51c2733b6f256201\": failed to find network info for sandbox \"a5935d5f1bcd9300bc8353efd02954304aafc6be6d0901cc51c2733b6f256201\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.616703717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xmkv2,Uid:9950d4ad-cc22-4962-88ac-47beba90840d,Namespace:kube-system,Attempt:1,} returns sandbox id \"5b4da5b4ca0dd4fda445fe75a7f52a7276b68d234902ff8cfa5275ffb63ce4a7\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.623591434Z" level=info msg="CreateContainer within sandbox \"5b4da5b4ca0dd4fda445fe75a7f52a7276b68d234902ff8cfa5275ffb63ce4a7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.638133937Z" level=info msg="CreateContainer within sandbox \"5b4da5b4ca0dd4fda445fe75a7f52a7276b68d234902ff8cfa5275ffb63ce4a7\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"876a049bd2c0944bdc54c0e0c28dacdc47d4c43e6f9debfb640fb0ae0ebc09e3\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.639304810Z" level=info msg="StartContainer for \"876a049bd2c0944bdc54c0e0c28dacdc47d4c43e6f9debfb640fb0ae0ebc09e3\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.708909652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:eff56a05-7e3a-4af0-9c37-7e4b4a5b6334,Namespace:kube-system,Attempt:1,} returns sandbox id \"7f471c7ae7612cfa00ea52b54ff36497fa8cc69aeceaca2f058b6194be5bd478\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.719615743Z" level=info msg="CreateContainer within sandbox \"7f471c7ae7612cfa00ea52b54ff36497fa8cc69aeceaca2f058b6194be5bd478\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.741673566Z" level=info msg="CreateContainer within sandbox \"7f471c7ae7612cfa00ea52b54ff36497fa8cc69aeceaca2f058b6194be5bd478\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"84fd9d8b5e329e51d64ecd20963e0a6601ce8ea362348c6113030b16cc394f0a\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.743876737Z" level=info msg="StartContainer for \"84fd9d8b5e329e51d64ecd20963e0a6601ce8ea362348c6113030b16cc394f0a\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.769769971Z" level=info msg="StartContainer for \"876a049bd2c0944bdc54c0e0c28dacdc47d4c43e6f9debfb640fb0ae0ebc09e3\" returns successfully"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.848931830Z" level=info msg="StartContainer for \"84fd9d8b5e329e51d64ecd20963e0a6601ce8ea362348c6113030b16cc394f0a\" returns successfully"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.887714735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-k9944,Uid:ee352ec9-4e85-4bd1-9933-d4bf06151211,Namespace:kube-system,Attempt:1,} returns sandbox id \"aae1d5a52934e3769ade3546999d71c8add1fa66d3f74096a3992aef396e6c6e\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.893645141Z" level=info msg="CreateContainer within sandbox \"aae1d5a52934e3769ade3546999d71c8add1fa66d3f74096a3992aef396e6c6e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.913489562Z" level=info msg="CreateContainer within sandbox \"aae1d5a52934e3769ade3546999d71c8add1fa66d3f74096a3992aef396e6c6e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"ff15da67a3dd2104eb9d299adf7cce30bf5157e1ed3c5ea757353118130340cc\""
	Sep 19 23:14:17 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:17.914284930Z" level=info msg="StartContainer for \"ff15da67a3dd2104eb9d299adf7cce30bf5157e1ed3c5ea757353118130340cc\""
	Sep 19 23:14:18 newest-cni-312465 containerd[426]: time="2025-09-19T23:14:18.085237016Z" level=info msg="StartContainer for \"ff15da67a3dd2104eb9d299adf7cce30bf5157e1ed3c5ea757353118130340cc\" returns successfully"
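	Editor's note on the two RunPodSandbox errors above: the sandboxes for coredns-66bc5c9577-xsnhs and metrics-server-746fcd58dc-sbqxp fail at 23:14:17 with "failed to find network info for sandbox", while the recreated kindnet-cni container only reports StartContainer success at 23:14:18, so the failures fall inside the window where the CNI was still coming back up. If such errors persisted past that window, an illustrative follow-up (not part of the captured log) would be to confirm a CNI config is present on the node:
	
	    # Illustrative check, not captured in this report: list the CNI config on the node.
	    minikube -p newest-cni-312465 ssh -- sudo ls -l /etc/cni/net.d
	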
	
	
	==> describe nodes <==
	Name:               newest-cni-312465
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-312465
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=newest-cni-312465
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_13_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:13:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-312465
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:14:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:14:17 +0000   Fri, 19 Sep 2025 23:13:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:14:17 +0000   Fri, 19 Sep 2025 23:13:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:14:17 +0000   Fri, 19 Sep 2025 23:13:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:14:17 +0000   Fri, 19 Sep 2025 23:13:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-312465
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 c74d3be6add3408da233db9049d6523b
	  System UUID:                0f85fc34-fc6d-40d4-accc-1a556e194ee2
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-xsnhs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-newest-cni-312465                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-k9944                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-312465              250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-312465     200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-xmkv2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-312465              100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 metrics-server-746fcd58dc-sbqxp               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         24s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wpkq4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-c6hgc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 9s    kube-proxy       
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node newest-cni-312465 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  32s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node newest-cni-312465 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node newest-cni-312465 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  RegisteredNode           27s   node-controller  Node newest-cni-312465 event: Registered Node newest-cni-312465 in Controller
	  Normal  RegisteredNode           8s    node-controller  Node newest-cni-312465 event: Registered Node newest-cni-312465 in Controller
	  Normal  Starting                 6s    kubelet          Starting kubelet.
	  Normal  Starting                 5s    kubelet          Starting kubelet.
	  Normal  Starting                 4s    kubelet          Starting kubelet.
	  Normal  Starting                 3s    kubelet          Starting kubelet.
	  Normal  Starting                 3s    kubelet          Starting kubelet.
	  Normal  Starting                 2s    kubelet          Starting kubelet.
	  Normal  Starting                 1s    kubelet          Starting kubelet.
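	Editor's note: the tail of this Events table shows "Starting kubelet." seven times within the last ~6 seconds, i.e. the kubelet was restarted repeatedly just before this snapshot; when triaging the failure it is worth correlating those restarts with the kubelet journal. An illustrative command for that (a suggestion, not part of the captured output):
	
	    # Illustrative follow-up, not captured in this report: show recent kubelet log lines on the node.
	    minikube -p newest-cni-312465 ssh -- sudo journalctl -u kubelet -n 100 --no-pager
	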
	
	
	==> dmesg <==
	[  +0.995983] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.504855] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.994952] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.505945] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501588] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.993278] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.507907] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500995] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.992251] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.509112] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501500] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989799] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.510643] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501653] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989383] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.511830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500482] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989056] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.512088] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500241] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501649] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501291] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501422] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[Sep19 23:13] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	
	
	==> etcd [901a24762656849ac73b160ebe4d6031cc41bae30508e7e9b204baf440837dc2] <==
	{"level":"warn","ts":"2025-09-19T23:13:52.611673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.620654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.628338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.637058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.645037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.660031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.667997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.675896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.684637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.692625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.700709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.709416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.719558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.728829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.736466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.745589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.770298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.789669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.796406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.806579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.817747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:52.898374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42060","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T23:14:04.761663Z","caller":"traceutil/trace.go:172","msg":"trace[1275008077] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"143.202837ms","start":"2025-09-19T23:14:04.618439Z","end":"2025-09-19T23:14:04.761642Z","steps":["trace[1275008077] 'process raft request'  (duration: 63.366739ms)","trace[1275008077] 'compare'  (duration: 79.73343ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:14:04.896581Z","caller":"traceutil/trace.go:172","msg":"trace[1546239987] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"130.473535ms","start":"2025-09-19T23:14:04.766079Z","end":"2025-09-19T23:14:04.896552Z","steps":["trace[1546239987] 'process raft request'  (duration: 130.391568ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:14:04.896619Z","caller":"traceutil/trace.go:172","msg":"trace[1784497646] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"130.637142ms","start":"2025-09-19T23:14:04.765936Z","end":"2025-09-19T23:14:04.896573Z","steps":["trace[1784497646] 'process raft request'  (duration: 104.769724ms)","trace[1784497646] 'compare'  (duration: 25.620753ms)"],"step_count":2}
	
	
	==> etcd [d16672651834257c44a7b8bb09a2a96900893d4c44bf6bc2f77df309038082a3] <==
	{"level":"warn","ts":"2025-09-19T23:14:16.141970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.149711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.173294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.181874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.190554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.198738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.208363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.217317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.228731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.236617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.244347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.252923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.260019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.268636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.283526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.293896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.303819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.312817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.321723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.330749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.339876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.359652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.364882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.374731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:14:16.458009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36786","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:14:28 up  1:56,  0 users,  load average: 5.28, 4.10, 2.54
	Linux newest-cni-312465 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [03fb64d8cae80bb3a6cbd4e75fb9b8bed32c133d882bac12b3e69b1d615553f9] <==
	I0919 23:14:03.726806       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0919 23:14:03.727181       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0919 23:14:03.727503       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:14:03.727526       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:14:03.819369       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:14:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:14:04.119672       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:14:04.119853       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:14:04.119882       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:14:04.120205       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0919 23:14:04.620400       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:14:04.620428       1 metrics.go:72] Registering metrics
	I0919 23:14:04.620484       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kindnet [ff15da67a3dd2104eb9d299adf7cce30bf5157e1ed3c5ea757353118130340cc] <==
	I0919 23:14:18.358445       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0919 23:14:18.358781       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0919 23:14:18.358909       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:14:18.358931       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:14:18.358963       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:14:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:14:18.756607       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:14:18.756675       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:14:18.756698       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:14:18.757940       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0919 23:14:19.057688       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:14:19.057724       1 metrics.go:72] Registering metrics
	I0919 23:14:19.057813       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [40de363bc7b2f50d2bd03fca2ad5e8490b8a75e657c34924508263e639e1e39f] <==
	I0919 23:14:20.630097       1 controller.go:667] quota admission added evaluator for: endpoints
	I0919 23:14:22.522612       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 23:14:22.525029       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	{"level":"warn","ts":"2025-09-19T23:14:23.664280Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0006014a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0919 23:14:23.664466       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:14:23.664493       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 23:14:23.664521       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.046µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0919 23:14:23.665898       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:14:23.666050       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.726966ms" method="POST" path="/api/v1/namespaces/default/events" result=null
	E0919 23:14:25.977267       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:14:25.977310       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	{"level":"warn","ts":"2025-09-19T23:14:25.977364Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00227e1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0919 23:14:25.977511       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 166.893µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0919 23:14:25.978651       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:14:25.978963       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.87758ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/newest-cni-312465" result=null
	E0919 23:14:26.912824       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"client disconnected\"}: client disconnected" logger="UnhandledError"
	E0919 23:14:26.912979       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:14:26.914322       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 23:14:26.914368       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:14:26.915634       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.850365ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/newest-cni-312465" result=null
	E0919 23:14:27.682221       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"client disconnected\"}: client disconnected" logger="UnhandledError"
	E0919 23:14:27.682381       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:14:27.683521       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 23:14:27.683564       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:14:27.684812       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.619746ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/newest-cni-312465" result=null
	
	
	==> kube-apiserver [879c323689e20cb30fefa0341fc12a9b42debf5a0380f2c22c16c23aefb17b5e] <==
	I0919 23:13:56.355505       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0919 23:13:56.365521       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 23:14:02.067292       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 23:14:02.267485       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:14:02.369080       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:14:02.374977       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:14:04.361918       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 23:14:04.366861       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:14:04.366930       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 23:14:04.366994       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0919 23:14:04.762362       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.97.148.35"}
	W0919 23:14:04.899652       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:14:04.899829       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 23:14:04.902748       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	W0919 23:14:04.907664       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:14:04.907735       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-controller-manager [330d509e7f38bc22edbf12409609e434f6b3103c1619d9e3d23b87443e86e201] <==
	I0919 23:14:20.573099       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 23:14:20.573230       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 23:14:20.573419       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 23:14:20.573677       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 23:14:20.573700       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 23:14:20.575355       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 23:14:20.577460       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0919 23:14:20.578110       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 23:14:20.579482       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 23:14:20.579610       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 23:14:20.579788       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 23:14:20.585279       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 23:14:20.585512       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 23:14:20.585668       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 23:14:20.586025       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:14:20.607274       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:14:20.613764       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0919 23:14:20.617145       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 23:14:20.617364       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 23:14:20.617447       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-312465"
	I0919 23:14:20.617535       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 23:14:20.623489       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 23:14:20.623551       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0919 23:14:20.639065       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:14:22.405344       1 request.go:752] "Waited before sending request" delay="1.582482942s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://192.168.94.2:8443/api/v1/namespaces/kube-system/serviceaccounts/deployment-controller/token"
	
	
	==> kube-controller-manager [529122f97b267c7d2c20849ccbcc739630ced21969d0da2315cc2bb32dc0c09e] <==
	I0919 23:14:01.416297       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 23:14:01.416379       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-312465"
	I0919 23:14:01.416426       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 23:14:01.416484       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 23:14:01.417869       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 23:14:01.421010       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 23:14:01.421535       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 23:14:01.421630       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:14:01.421649       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 23:14:01.421673       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0919 23:14:01.421675       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 23:14:01.421685       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 23:14:01.421773       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 23:14:01.421973       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 23:14:01.422070       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 23:14:01.422097       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 23:14:01.423727       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0919 23:14:01.423846       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 23:14:01.423962       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 23:14:01.426612       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:14:01.428935       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 23:14:01.438991       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 23:14:01.445291       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:14:01.448514       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	E0919 23:14:04.399416       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-746fcd58dc\" failed with pods \"metrics-server-746fcd58dc-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [876a049bd2c0944bdc54c0e0c28dacdc47d4c43e6f9debfb640fb0ae0ebc09e3] <==
	I0919 23:14:17.821294       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:14:17.892060       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:14:17.992276       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:14:17.992321       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0919 23:14:17.992455       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:14:18.020801       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:14:18.020874       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:14:18.027755       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:14:18.028231       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:14:18.028270       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:14:18.029958       1 config.go:309] "Starting node config controller"
	I0919 23:14:18.029975       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:14:18.029983       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:14:18.030215       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:14:18.030283       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:14:18.030371       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:14:18.030321       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:14:18.030239       1 config.go:200] "Starting service config controller"
	I0919 23:14:18.030452       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:14:18.131363       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:14:18.131391       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 23:14:18.131374       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [aa4f1d7ae4be8607dc91cdece6dc505e811e83bc72a4d7ac0cf5dbb0e3120d87] <==
	I0919 23:14:03.307101       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:14:03.397896       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:14:03.499103       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:14:03.499145       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0919 23:14:03.499415       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:14:03.546823       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:14:03.546952       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:14:03.555595       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:14:03.559926       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:14:03.559980       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:14:03.562690       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:14:03.562714       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:14:03.563203       1 config.go:200] "Starting service config controller"
	I0919 23:14:03.563214       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:14:03.564543       1 config.go:309] "Starting node config controller"
	I0919 23:14:03.564559       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:14:03.564566       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:14:03.567766       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:14:03.567943       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:14:03.663436       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:14:03.663445       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 23:14:03.668915       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [02f3965879829c98ed424d224c8a4ecc467b95a2b385c7eb4440639f1bccf628] <==
	E0919 23:13:53.464021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 23:13:53.464841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 23:13:53.464844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 23:13:53.464914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 23:13:53.464698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 23:13:54.359266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 23:13:54.384937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 23:13:54.429091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 23:13:54.434372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 23:13:54.478498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 23:13:54.495097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 23:13:54.594486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 23:13:54.643484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 23:13:54.745929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 23:13:54.770074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 23:13:54.792464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 23:13:54.849171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 23:13:54.866839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 23:13:54.896416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 23:13:54.897327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 23:13:54.923395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 23:13:54.957107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 23:13:55.044298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 23:13:55.052558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I0919 23:13:57.457519       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [5d374a5f9fa61d8c19b81dd5eb8f4477bb61006527c2942d6afa885f27f4d80d] <==
	I0919 23:14:16.194618       1 serving.go:386] Generated self-signed cert in-memory
	I0919 23:14:17.061424       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 23:14:17.061468       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:14:17.069921       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0919 23:14:17.069969       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0919 23:14:17.070039       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:14:17.070068       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:14:17.070098       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:14:17.070112       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:14:17.074659       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 23:14:17.074775       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 23:14:17.171014       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0919 23:14:17.171197       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:14:17.172494       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.392754    2697 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: E0919 23:14:28.393097    2697 file_linux.go:61] "Unable to read config path" err="unable to create inotify: too many open files" path="/etc/kubernetes/manifests"
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.394149    2697 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.7.27" apiVersion="v1"
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.395210    2697 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.395403    2697 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: E0919 23:14:28.395506    2697 plugins.go:580] "Error initializing dynamic plugin prober" err="error initializing watcher: too many open files"
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.398471    2697 server.go:1262] "Started kubelet"
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.399389    2697 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.401126    2697 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.401555    2697 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.401770    2697 server_v1.go:49] "podresources" method="list" useActivePods=true
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.402061    2697 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.404317    2697 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: E0919 23:14:28.404482    2697 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.406626    2697 server.go:310] "Adding debug handlers to kubelet server"
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.408138    2697 volume_manager.go:313] "Starting Kubelet Volume Manager"
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.410924    2697 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.412922    2697 reconciler.go:29] "Reconciler: start to sync state"
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.414216    2697 factory.go:223] Registration of the systemd container factory successfully
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.414358    2697 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: I0919 23:14:28.417263    2697 factory.go:223] Registration of the containerd container factory successfully
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: E0919 23:14:28.417295    2697 manager.go:294] Registration of the raw container factory failed: inotify_init: too many open files
	Sep 19 23:14:28 newest-cni-312465 kubelet[2697]: E0919 23:14:28.417316    2697 kubelet.go:1686] "Failed to start cAdvisor" err="inotify_init: too many open files"
	Sep 19 23:14:28 newest-cni-312465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Sep 19 23:14:28 newest-cni-312465 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	
	==> storage-provisioner [3bb98db115b6ad13cceece8b521436100bb04d0ceb273c75d323e94ef7440804] <==
	I0919 23:14:03.784231       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 23:14:03.796533       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 23:14:03.796587       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0919 23:14:03.800242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:14:03.808034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0919 23:14:03.808286       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 23:14:03.808672       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-312465_a3e2dca0-2fc8-4412-9a2e-1720b60169f2!
	I0919 23:14:03.809402       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"050de51a-8669-4e49-a7e4-ac16a3fefa25", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-312465_a3e2dca0-2fc8-4412-9a2e-1720b60169f2 became leader
	W0919 23:14:03.818056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:14:03.826974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0919 23:14:03.909772       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-312465_a3e2dca0-2fc8-4412-9a2e-1720b60169f2!
	
	
	==> storage-provisioner [84fd9d8b5e329e51d64ecd20963e0a6601ce8ea362348c6113030b16cc394f0a] <==
	I0919 23:14:17.865524       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-312465 -n newest-cni-312465
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-312465 -n newest-cni-312465: exit status 2 (393.623357ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-312465 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-xsnhs metrics-server-746fcd58dc-sbqxp dashboard-metrics-scraper-6ffb444bf9-wpkq4 kubernetes-dashboard-855c9754f9-c6hgc
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-312465 describe pod coredns-66bc5c9577-xsnhs metrics-server-746fcd58dc-sbqxp dashboard-metrics-scraper-6ffb444bf9-wpkq4 kubernetes-dashboard-855c9754f9-c6hgc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-312465 describe pod coredns-66bc5c9577-xsnhs metrics-server-746fcd58dc-sbqxp dashboard-metrics-scraper-6ffb444bf9-wpkq4 kubernetes-dashboard-855c9754f9-c6hgc: exit status 1 (87.208471ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-xsnhs" not found
	Error from server (NotFound): pods "metrics-server-746fcd58dc-sbqxp" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-wpkq4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-c6hgc" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-312465 describe pod coredns-66bc5c9577-xsnhs metrics-server-746fcd58dc-sbqxp dashboard-metrics-scraper-6ffb444bf9-wpkq4 kubernetes-dashboard-855c9754f9-c6hgc: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.35s)
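The kubelet log above ends with a cluster of "too many open files" failures (inotify_init, fsnotify watcher creation, the /etc/kubernetes/manifests config watch) before systemd reports kubelet.service exiting with status 1, which points at exhausted per-user inotify limits on the shared CI host rather than anything Kubernetes-specific. A minimal sketch, assuming a Linux host, of reading the two kernel limits involved (the /proc paths below are the standard locations; their actual values were not recorded in this run):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Per-user inotify limits; when inotify_init fails with
		// "too many open files", max_user_instances is usually the
		// one that has been exhausted.
		paths := []string{
			"/proc/sys/fs/inotify/max_user_instances",
			"/proc/sys/fs/inotify/max_user_watches",
		}
		for _, p := range paths {
			b, err := os.ReadFile(p)
			if err != nil {
				fmt.Fprintf(os.Stderr, "read %s: %v\n", p, err)
				continue
			}
			fmt.Printf("%s = %s\n", p, strings.TrimSpace(string(b)))
		}
	}

Raising fs.inotify.max_user_instances and fs.inotify.max_user_watches via sysctl is the usual remedy when many kubelet and containerd instances share one host.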

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (9.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-149888 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888
E0919 23:16:30.641053   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888: exit status 2 (340.743784ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888: exit status 2 (388.361186ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-149888 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888: exit status 2 (386.885055ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888: exit status 2 (361.633093ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
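The sequence above is the whole assertion: pause the profile, record the {{.APIServer}} and {{.Kubelet}} status templates, unpause, and expect the kubelet to come back to "Running"; in this run it stayed "Stopped" after unpause. A minimal Go sketch of that same sequence driven through os/exec, reusing the binary path, profile name, and --format templates shown in the log (an illustration only, not the actual start_stop_delete_test.go logic):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// status shells out to the minikube binary used in this report; a
	// non-zero exit "may be ok" here (paused/stopped components), so the
	// error is ignored and only stdout is returned.
	func status(profile, format string) string {
		out, _ := exec.Command("out/minikube-linux-amd64",
			"status", "--format="+format, "-p", profile).Output()
		return strings.TrimSpace(string(out))
	}

	func main() {
		const profile = "default-k8s-diff-port-149888"

		if err := exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run(); err != nil {
			log.Fatalf("pause: %v", err)
		}
		fmt.Println("after pause:   apiserver =", status(profile, "{{.APIServer}}"),
			"kubelet =", status(profile, "{{.Kubelet}}"))

		if err := exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run(); err != nil {
			log.Fatalf("unpause: %v", err)
		}
		kubelet := status(profile, "{{.Kubelet}}")
		if kubelet != "Running" {
			log.Fatalf("post-unpause kubelet status = %q; want %q", kubelet, "Running")
		}
	}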
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-149888
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-149888:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "099d669e8ec59e0380498b09d42c20dc8bf9ac466be9ffe01804379b352bff31",
	        "Created": "2025-09-19T23:12:53.067980944Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 341269,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T23:15:26.233239415Z",
	            "FinishedAt": "2025-09-19T23:15:25.162757657Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/099d669e8ec59e0380498b09d42c20dc8bf9ac466be9ffe01804379b352bff31/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/099d669e8ec59e0380498b09d42c20dc8bf9ac466be9ffe01804379b352bff31/hostname",
	        "HostsPath": "/var/lib/docker/containers/099d669e8ec59e0380498b09d42c20dc8bf9ac466be9ffe01804379b352bff31/hosts",
	        "LogPath": "/var/lib/docker/containers/099d669e8ec59e0380498b09d42c20dc8bf9ac466be9ffe01804379b352bff31/099d669e8ec59e0380498b09d42c20dc8bf9ac466be9ffe01804379b352bff31-json.log",
	        "Name": "/default-k8s-diff-port-149888",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-149888:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-149888",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "099d669e8ec59e0380498b09d42c20dc8bf9ac466be9ffe01804379b352bff31",
	                "LowerDir": "/var/lib/docker/overlay2/d230332dd75b686ff154fa7b4d62c612019f129f809a566c88f37f91a9e6f165-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d230332dd75b686ff154fa7b4d62c612019f129f809a566c88f37f91a9e6f165/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d230332dd75b686ff154fa7b4d62c612019f129f809a566c88f37f91a9e6f165/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d230332dd75b686ff154fa7b4d62c612019f129f809a566c88f37f91a9e6f165/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-149888",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-149888/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-149888",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-149888",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-149888",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf3f0544294ce8810b61d63756a175d4ee318bfeaca508f45aa96fab666a84f7",
	            "SandboxKey": "/var/run/docker/netns/cf3f0544294c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-149888": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:4b:e2:b3:29:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0971ed35276bed5da4f47ad531607cc67550d8b9076fbbdee7b98bcf6f2f6f37",
	                    "EndpointID": "ea06d3789a79938a414a89b4fc1901c8e31ca6f4fc05a030da47abc9649a2f4c",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-149888",
	                        "099d669e8ec5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
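The docker inspect dump above is what the post-mortem works from; for a Pause failure the relevant pieces are State.Status / State.Paused and the published host ports under NetworkSettings.Ports. A minimal Go sketch, assuming the docker CLI is on PATH and using the container name from this run, that decodes just those fields from the same JSON:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// container mirrors only the docker inspect fields inspected here.
	type container struct {
		State struct {
			Status string
			Paused bool
		}
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-149888").Output()
		if err != nil {
			log.Fatalf("docker inspect: %v", err)
		}
		var cs []container
		if err := json.Unmarshal(out, &cs); err != nil {
			log.Fatalf("decode: %v", err)
		}
		if len(cs) == 0 {
			log.Fatal("no container found")
		}
		c := cs[0]
		fmt.Printf("status=%s paused=%v\n", c.State.Status, c.State.Paused)
		for port, binds := range c.NetworkSettings.Ports {
			for _, b := range binds {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}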
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888: exit status 2 (374.948075ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-149888 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-149888 logs -n 25: (1.99865799s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-896447 sudo systemctl status docker --all --full --no-pager                                                                                                         │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │                     │
	│ ssh     │ -p kindnet-896447 sudo systemctl cat docker --no-pager                                                                                                                         │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo cat /etc/docker/daemon.json                                                                                                                             │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │                     │
	│ ssh     │ -p kindnet-896447 sudo docker system info                                                                                                                                      │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │                     │
	│ ssh     │ -p kindnet-896447 sudo systemctl status cri-docker --all --full --no-pager                                                                                                     │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-149888 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                        │ default-k8s-diff-port-149888 │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ start   │ -p default-k8s-diff-port-149888 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ default-k8s-diff-port-149888 │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:16 UTC │
	│ ssh     │ -p kindnet-896447 sudo systemctl cat cri-docker --no-pager                                                                                                                     │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │                     │
	│ ssh     │ -p kindnet-896447 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                          │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo cri-dockerd --version                                                                                                                                   │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo systemctl status containerd --all --full --no-pager                                                                                                     │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo systemctl cat containerd --no-pager                                                                                                                     │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo cat /lib/systemd/system/containerd.service                                                                                                              │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo cat /etc/containerd/config.toml                                                                                                                         │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo containerd config dump                                                                                                                                  │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo systemctl status crio --all --full --no-pager                                                                                                           │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │                     │
	│ ssh     │ -p kindnet-896447 sudo systemctl cat crio --no-pager                                                                                                                           │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                 │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo crio config                                                                                                                                             │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ delete  │ -p kindnet-896447                                                                                                                                                              │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ start   │ -p enable-default-cni-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd          │ enable-default-cni-896447    │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │                     │
	│ image   │ default-k8s-diff-port-149888 image list --format=json                                                                                                                          │ default-k8s-diff-port-149888 │ jenkins │ v1.37.0 │ 19 Sep 25 23:16 UTC │ 19 Sep 25 23:16 UTC │
	│ pause   │ -p default-k8s-diff-port-149888 --alsologtostderr -v=1                                                                                                                         │ default-k8s-diff-port-149888 │ jenkins │ v1.37.0 │ 19 Sep 25 23:16 UTC │ 19 Sep 25 23:16 UTC │
	│ unpause │ -p default-k8s-diff-port-149888 --alsologtostderr -v=1                                                                                                                         │ default-k8s-diff-port-149888 │ jenkins │ v1.37.0 │ 19 Sep 25 23:16 UTC │ 19 Sep 25 23:16 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:15:32
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:15:32.908800  344703 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:15:32.909449  344703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:15:32.909464  344703 out.go:374] Setting ErrFile to fd 2...
	I0919 23:15:32.909471  344703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:15:32.909954  344703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 23:15:32.910752  344703 out.go:368] Setting JSON to false
	I0919 23:15:32.912137  344703 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7077,"bootTime":1758316656,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:15:32.912252  344703 start.go:140] virtualization: kvm guest
	I0919 23:15:32.916948  344703 out.go:179] * [enable-default-cni-896447] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:15:32.919033  344703 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:15:32.919086  344703 notify.go:220] Checking for updates...
	I0919 23:15:32.922145  344703 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:15:32.923531  344703 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:15:32.924966  344703 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 23:15:32.926438  344703 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:15:32.927884  344703 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:15:32.929707  344703 config.go:182] Loaded profile config "calico-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:15:32.929827  344703 config.go:182] Loaded profile config "custom-flannel-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:15:32.929910  344703 config.go:182] Loaded profile config "default-k8s-diff-port-149888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:15:32.930001  344703 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:15:32.959816  344703 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:15:32.959929  344703 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:15:33.025827  344703 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-19 23:15:33.014755271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:15:33.025932  344703 docker.go:318] overlay module found
	I0919 23:15:33.028149  344703 out.go:179] * Using the docker driver based on user configuration
	I0919 23:15:33.030391  344703 start.go:304] selected driver: docker
	I0919 23:15:33.030414  344703 start.go:918] validating driver "docker" against <nil>
	I0919 23:15:33.030429  344703 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:15:33.031103  344703 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:15:33.099704  344703 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-19 23:15:33.086770268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:15:33.099875  344703 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E0919 23:15:33.100105  344703 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0919 23:15:33.100139  344703 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:15:33.102689  344703 out.go:179] * Using Docker driver with root privileges
	I0919 23:15:33.104073  344703 cni.go:84] Creating CNI manager for "bridge"
	I0919 23:15:33.104098  344703 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 23:15:33.104233  344703 start.go:348] cluster config:
	{Name:enable-default-cni-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:enable-default-cni-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:15:33.105990  344703 out.go:179] * Starting "enable-default-cni-896447" primary control-plane node in "enable-default-cni-896447" cluster
	I0919 23:15:33.107421  344703 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 23:15:33.108906  344703 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:15:33.110129  344703 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:15:33.110189  344703 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 23:15:33.110205  344703 cache.go:58] Caching tarball of preloaded images
	I0919 23:15:33.110222  344703 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:15:33.110313  344703 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:15:33.110327  344703 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 23:15:33.110457  344703 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/config.json ...
	I0919 23:15:33.110493  344703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/config.json: {Name:mk6e5425dbce9e674a343695a2d11340896d365f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:33.133744  344703 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:15:33.133777  344703 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:15:33.133798  344703 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:15:33.133824  344703 start.go:360] acquireMachinesLock for enable-default-cni-896447: {Name:mkcab8753a56cfe000149c538617f5edcdeaefe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:15:33.133927  344703 start.go:364] duration metric: took 84.85µs to acquireMachinesLock for "enable-default-cni-896447"
	I0919 23:15:33.133951  344703 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:enable-default-cni-896447 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:15:33.134030  344703 start.go:125] createHost starting for "" (driver="docker")
	I0919 23:15:32.367449  340598 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-149888 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:15:32.388831  340598 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0919 23:15:32.393565  340598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:15:32.406851  340598 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-149888 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-149888 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.
L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:15:32.406977  340598 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:15:32.407027  340598 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:15:32.444869  340598 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:15:32.444893  340598 containerd.go:534] Images already preloaded, skipping extraction
	I0919 23:15:32.444955  340598 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:15:32.482815  340598 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:15:32.482841  340598 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:15:32.482849  340598 kubeadm.go:926] updating node { 192.168.103.2 8444 v1.34.0 containerd true true} ...
	I0919 23:15:32.482961  340598 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-149888 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-149888 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:15:32.483028  340598 ssh_runner.go:195] Run: sudo crictl info
	I0919 23:15:32.526763  340598 cni.go:84] Creating CNI manager for ""
	I0919 23:15:32.526793  340598 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:15:32.526810  340598 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:15:32.526846  340598 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-149888 NodeName:default-k8s-diff-port-149888 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube
/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:15:32.527018  340598 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-149888"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:15:32.527102  340598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:15:32.537472  340598 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:15:32.537543  340598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:15:32.547973  340598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (333 bytes)
	I0919 23:15:32.569211  340598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:15:32.590634  340598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2243 bytes)
	I0919 23:15:32.613200  340598 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:15:32.617432  340598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:15:32.632368  340598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:15:32.708208  340598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:15:32.734102  340598 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888 for IP: 192.168.103.2
	I0919 23:15:32.734124  340598 certs.go:194] generating shared ca certs ...
	I0919 23:15:32.734146  340598 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:32.734309  340598 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 23:15:32.734359  340598 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 23:15:32.734374  340598 certs.go:256] generating profile certs ...
	I0919 23:15:32.734479  340598 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888/client.key
	I0919 23:15:32.734563  340598 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888/apiserver.key.404e604f
	I0919 23:15:32.734614  340598 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888/proxy-client.key
	I0919 23:15:32.734752  340598 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:15:32.734799  340598 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:15:32.734813  340598 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:15:32.734849  340598 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:15:32.734883  340598 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:15:32.734916  340598 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:15:32.734974  340598 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:15:32.735654  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:15:32.765344  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:15:32.798571  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:15:32.837531  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:15:32.877303  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 23:15:32.908620  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:15:32.939351  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:15:32.971241  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:15:33.007252  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:15:33.038467  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:15:33.073713  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:15:33.104422  340598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:15:33.125651  340598 ssh_runner.go:195] Run: openssl version
	I0919 23:15:33.132270  340598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:15:33.143613  340598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:33.148448  340598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:33.148517  340598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:33.156362  340598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:15:33.167285  340598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:15:33.179868  340598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:15:33.184437  340598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:15:33.184506  340598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:15:33.192662  340598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 23:15:33.203725  340598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:15:33.214834  340598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:15:33.219590  340598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:15:33.219658  340598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:15:33.229560  340598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:15:33.241012  340598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:15:33.245355  340598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 23:15:33.253344  340598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 23:15:33.261198  340598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 23:15:33.269077  340598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 23:15:33.277486  340598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 23:15:33.285630  340598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 23:15:33.294490  340598 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-149888 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-149888 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L M
ountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:15:33.294594  340598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:15:33.294648  340598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:15:33.366308  340598 cri.go:89] found id: "8fe3c9f630050a4562bdee872e3bd5b158ebb872b819c9704b64439e00342d40"
	I0919 23:15:33.366380  340598 cri.go:89] found id: "cf6b3300eb0813fdae69407769c3f6c2a181ed057592256e5d0484216657585d"
	I0919 23:15:33.366398  340598 cri.go:89] found id: "351f4368e8712652bd68f0bd0ebb515c4f49fef1d60d7f5a8189bd9bb301dfa1"
	I0919 23:15:33.366412  340598 cri.go:89] found id: "fc26366126b18bc013992c759f1ace9b13c7b3a4d0bf6ba034cf10b8bc295925"
	I0919 23:15:33.366425  340598 cri.go:89] found id: "bbfb1c954fb1034180e24edeaa8f8df98c52266fc3bff9938f32230a087e7bf7"
	I0919 23:15:33.366438  340598 cri.go:89] found id: "c43b276ad64808b3638f48fb95a466e4ac5a6ca6b0f2e698462337fbab846497"
	I0919 23:15:33.366451  340598 cri.go:89] found id: "c2e3a7b89e4703676da0d2bd9bc89da04f199a71876c7e42f6ed8afbc9fd9473"
	I0919 23:15:33.366483  340598 cri.go:89] found id: "6cb08d2f210eda6eb6b104b96ac64e816b7fab2dd877c455b3d32f16fa032f13"
	I0919 23:15:33.366505  340598 cri.go:89] found id: ""
	I0919 23:15:33.366562  340598 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0919 23:15:33.392017  340598 cri.go:116] JSON = [{"ociVersion":"1.2.0","id":"e51828479176cf200621b9413a84782d1fd36bebb818cf4d308e648916728f07","pid":872,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e51828479176cf200621b9413a84782d1fd36bebb818cf4d308e648916728f07","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e51828479176cf200621b9413a84782d1fd36bebb818cf4d308e648916728f07/rootfs","created":"2025-09-19T23:15:33.381068889Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"e51828479176cf200621b9413a84782d1fd36bebb818cf4d308e648916728f07","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-diff-port-149888_8ea67c8a9090832adce3801a31c5da22","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-defaul
t-k8s-diff-port-149888","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8ea67c8a9090832adce3801a31c5da22"},"owner":"root"}]
	I0919 23:15:33.392106  340598 cri.go:126] list returned 1 containers
	I0919 23:15:33.392131  340598 cri.go:129] container: {ID:e51828479176cf200621b9413a84782d1fd36bebb818cf4d308e648916728f07 Status:created}
	I0919 23:15:33.392188  340598 cri.go:131] skipping e51828479176cf200621b9413a84782d1fd36bebb818cf4d308e648916728f07 - not in ps
	I0919 23:15:33.392249  340598 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:15:33.409003  340598 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 23:15:33.409030  340598 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 23:15:33.409092  340598 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 23:15:33.425910  340598 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 23:15:33.426750  340598 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-149888" does not appear in /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:15:33.427828  340598 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14678/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-149888" cluster setting kubeconfig missing "default-k8s-diff-port-149888" context setting]
	I0919 23:15:33.428991  340598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:33.431337  340598 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 23:15:33.447295  340598 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0919 23:15:33.447376  340598 kubeadm.go:593] duration metric: took 38.338055ms to restartPrimaryControlPlane
	I0919 23:15:33.447407  340598 kubeadm.go:394] duration metric: took 152.910304ms to StartCluster
	I0919 23:15:33.447429  340598 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:33.447536  340598 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:15:33.448796  340598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:33.450821  340598 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:15:33.451317  340598 config.go:182] Loaded profile config "default-k8s-diff-port-149888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:15:33.451018  340598 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:15:33.451558  340598 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-149888"
	I0919 23:15:33.451588  340598 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-149888"
	W0919 23:15:33.451601  340598 addons.go:247] addon storage-provisioner should already be in state true
	I0919 23:15:33.451632  340598 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:15:33.452146  340598 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:15:33.452347  340598 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-149888"
	I0919 23:15:33.452369  340598 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-149888"
	I0919 23:15:33.452398  340598 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-149888"
	I0919 23:15:33.452610  340598 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-149888"
	I0919 23:15:33.452634  340598 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-149888"
	W0919 23:15:33.452644  340598 addons.go:247] addon metrics-server should already be in state true
	I0919 23:15:33.452665  340598 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:15:33.452687  340598 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:15:33.452795  340598 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-149888"
	W0919 23:15:33.452812  340598 addons.go:247] addon dashboard should already be in state true
	I0919 23:15:33.452835  340598 out.go:179] * Verifying Kubernetes components...
	I0919 23:15:33.452844  340598 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:15:33.453127  340598 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:15:33.453439  340598 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:15:33.454721  340598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:15:33.492811  340598 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 23:15:33.493845  340598 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-149888"
	W0919 23:15:33.493870  340598 addons.go:247] addon default-storageclass should already be in state true
	I0919 23:15:33.493899  340598 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:15:33.494576  340598 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 23:15:33.494869  340598 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 23:15:33.494962  340598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:15:33.495121  340598 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:15:33.496492  340598 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0919 23:15:33.498336  340598 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0919 23:15:33.499991  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0919 23:15:33.500011  340598 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0919 23:15:33.500072  340598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:15:33.506383  340598 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:15:30.071091  337922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.crt.e8405b3a ...
	I0919 23:15:30.071121  337922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.crt.e8405b3a: {Name:mkfc6d9fb70774e93edea0f30068f954d770e855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:30.071305  337922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.key.e8405b3a ...
	I0919 23:15:30.071323  337922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.key.e8405b3a: {Name:mkf2bc5573d0fadb539f07c387914ddabda7e1e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:30.071429  337922 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.crt.e8405b3a -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.crt
	I0919 23:15:30.071553  337922 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.key.e8405b3a -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.key
	I0919 23:15:30.071647  337922 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.key
	I0919 23:15:30.071669  337922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.crt with IP's: []
	I0919 23:15:30.329852  337922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.crt ...
	I0919 23:15:30.329880  337922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.crt: {Name:mkf4ed8753967f71e4fee5b648f600e5521ad677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:30.330033  337922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.key ...
	I0919 23:15:30.330046  337922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.key: {Name:mk83701e5fc5bd0781830011816d1b3c9031d60f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:30.330250  337922 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:15:30.330300  337922 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:15:30.330314  337922 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:15:30.330345  337922 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:15:30.330382  337922 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:15:30.330412  337922 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:15:30.330482  337922 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:15:30.331252  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:15:30.362599  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:15:30.391704  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:15:30.421576  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:15:30.450400  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 23:15:30.480058  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:15:30.508820  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:15:30.537403  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:15:30.566342  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:15:30.602719  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:15:30.633417  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:15:30.667352  337922 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:15:30.702068  337922 ssh_runner.go:195] Run: openssl version
	I0919 23:15:30.715192  337922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:15:30.729426  337922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:15:30.734175  337922 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:15:30.734234  337922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:15:30.742314  337922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:15:30.754034  337922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:15:30.766190  337922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:30.770790  337922 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:30.770854  337922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:30.779014  337922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:15:30.791960  337922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:15:30.804248  337922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:15:30.809303  337922 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:15:30.809370  337922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:15:30.819118  337922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 23:15:30.833413  337922 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:15:30.837665  337922 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:15:30.837733  337922 kubeadm.go:392] StartCluster: {Name:custom-flannel-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:custom-flannel-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:15:30.837821  337922 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:15:30.837894  337922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:15:30.882097  337922 cri.go:89] found id: ""
	I0919 23:15:30.882214  337922 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:15:30.892431  337922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:15:30.902347  337922 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:15:30.902399  337922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:15:30.912724  337922 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:15:30.912748  337922 kubeadm.go:157] found existing configuration files:
	
	I0919 23:15:30.912797  337922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:15:30.922795  337922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:15:30.922860  337922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:15:30.933361  337922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:15:30.943701  337922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:15:30.943770  337922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:15:30.954343  337922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:15:30.964250  337922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:15:30.964301  337922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:15:30.974023  337922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:15:30.984022  337922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:15:30.984084  337922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:15:30.994536  337922 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:15:31.056589  337922 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:15:31.120263  337922 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:15:33.509955  340598 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:15:33.509979  340598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:15:33.510053  340598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:15:33.535133  340598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:15:33.541327  340598 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:15:33.541357  340598 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:15:33.541425  340598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:15:33.549091  340598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:15:33.557730  340598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:15:33.570282  340598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:15:33.637116  340598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:15:33.670007  340598 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-149888" to be "Ready" ...
	I0919 23:15:33.705757  340598 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 23:15:33.705784  340598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 23:15:33.708629  340598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:15:33.711304  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0919 23:15:33.711377  340598 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0919 23:15:33.711803  340598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:15:33.765034  340598 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 23:15:33.765058  340598 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 23:15:33.792079  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0919 23:15:33.792113  340598 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0919 23:15:33.836250  340598 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:15:33.836277  340598 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 23:15:33.855571  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0919 23:15:33.855625  340598 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0919 23:15:33.876889  340598 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:15:33.876929  340598 retry.go:31] will retry after 335.346116ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:15:33.895379  340598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:15:33.921949  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0919 23:15:33.921980  340598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0919 23:15:33.952086  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0919 23:15:33.952110  340598 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0919 23:15:33.980888  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0919 23:15:33.980911  340598 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0919 23:15:34.009150  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0919 23:15:34.009201  340598 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0919 23:15:34.036022  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0919 23:15:34.036047  340598 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0919 23:15:34.059314  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:15:34.059340  340598 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0919 23:15:34.084023  340598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:15:34.212579  340598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:15:36.139946  340598 node_ready.go:49] node "default-k8s-diff-port-149888" is "Ready"
	I0919 23:15:36.139981  340598 node_ready.go:38] duration metric: took 2.469918832s for node "default-k8s-diff-port-149888" to be "Ready" ...
	I0919 23:15:36.139998  340598 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:15:36.140070  340598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:15:37.048579  340598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.3367104s)
	I0919 23:15:37.141561  340598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.246135134s)
	I0919 23:15:37.141615  340598 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-149888"
	I0919 23:15:37.747829  340598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.663746718s)
	I0919 23:15:37.747891  340598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (3.535272195s)
	I0919 23:15:37.747945  340598 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.607851251s)
	I0919 23:15:37.747974  340598 api_server.go:72] duration metric: took 4.297109416s to wait for apiserver process to appear ...
	I0919 23:15:37.747982  340598 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:15:37.748003  340598 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0919 23:15:37.752423  340598 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:15:37.752453  340598 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:15:37.777526  340598 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-149888 addons enable metrics-server
	
	I0919 23:15:33.136328  344703 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:15:33.136572  344703 start.go:159] libmachine.API.Create for "enable-default-cni-896447" (driver="docker")
	I0919 23:15:33.136603  344703 client.go:168] LocalClient.Create starting
	I0919 23:15:33.136657  344703 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 23:15:33.136694  344703 main.go:141] libmachine: Decoding PEM data...
	I0919 23:15:33.136708  344703 main.go:141] libmachine: Parsing certificate...
	I0919 23:15:33.136769  344703 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 23:15:33.136791  344703 main.go:141] libmachine: Decoding PEM data...
	I0919 23:15:33.136801  344703 main.go:141] libmachine: Parsing certificate...
	I0919 23:15:33.137109  344703 cli_runner.go:164] Run: docker network inspect enable-default-cni-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:15:33.156264  344703 cli_runner.go:211] docker network inspect enable-default-cni-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:15:33.156338  344703 network_create.go:284] running [docker network inspect enable-default-cni-896447] to gather additional debugging logs...
	I0919 23:15:33.156362  344703 cli_runner.go:164] Run: docker network inspect enable-default-cni-896447
	W0919 23:15:33.177325  344703 cli_runner.go:211] docker network inspect enable-default-cni-896447 returned with exit code 1
	I0919 23:15:33.177361  344703 network_create.go:287] error running [docker network inspect enable-default-cni-896447]: docker network inspect enable-default-cni-896447: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-896447 not found
	I0919 23:15:33.177389  344703 network_create.go:289] output of [docker network inspect enable-default-cni-896447]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-896447 not found
	
	** /stderr **
	I0919 23:15:33.177576  344703 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:15:33.198411  344703 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-465af21e2d8d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:b5:e0:20:10:48} reservation:<nil>}
	I0919 23:15:33.198985  344703 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd9dd989b948 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:ff:32:51:a1:28} reservation:<nil>}
	I0919 23:15:33.199671  344703 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d5cd7c460be9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:d8:0f:d3:49:b9} reservation:<nil>}
	I0919 23:15:33.200592  344703 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b5faa0}
	I0919 23:15:33.200640  344703 network_create.go:124] attempt to create docker network enable-default-cni-896447 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0919 23:15:33.200698  344703 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-896447 enable-default-cni-896447
	I0919 23:15:33.267497  344703 network_create.go:108] docker network enable-default-cni-896447 192.168.76.0/24 created
	I0919 23:15:33.267532  344703 kic.go:121] calculated static IP "192.168.76.2" for the "enable-default-cni-896447" container
	I0919 23:15:33.267604  344703 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:15:33.290626  344703 cli_runner.go:164] Run: docker volume create enable-default-cni-896447 --label name.minikube.sigs.k8s.io=enable-default-cni-896447 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:15:33.317669  344703 oci.go:103] Successfully created a docker volume enable-default-cni-896447
	I0919 23:15:33.317796  344703 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-896447-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-896447 --entrypoint /usr/bin/test -v enable-default-cni-896447:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:15:33.931755  344703 oci.go:107] Successfully prepared a docker volume enable-default-cni-896447
	I0919 23:15:33.931796  344703 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:15:33.931818  344703 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:15:33.931893  344703 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-896447:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 23:15:37.942618  340598 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0919 23:15:37.985625  340598 addons.go:514] duration metric: took 4.534584363s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0919 23:15:38.248802  340598 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0919 23:15:38.253533  340598 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:15:38.253564  340598 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:15:38.748181  340598 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0919 23:15:38.752478  340598 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:15:38.752549  340598 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:15:39.248627  340598 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0919 23:15:39.255957  340598 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:15:39.255985  340598 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:15:39.748144  340598 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0919 23:15:39.756623  340598 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0919 23:15:39.758067  340598 api_server.go:141] control plane version: v1.34.0
	I0919 23:15:39.758094  340598 api_server.go:131] duration metric: took 2.010104144s to wait for apiserver health ...
	I0919 23:15:39.758104  340598 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:15:39.767321  340598 system_pods.go:59] 9 kube-system pods found
	I0919 23:15:39.767423  340598 system_pods.go:61] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:39.767447  340598 system_pods.go:61] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:39.767496  340598 system_pods.go:61] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 23:15:39.767522  340598 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:39.767542  340598 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:15:39.767577  340598 system_pods.go:61] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:39.767587  340598 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:39.767595  340598 system_pods.go:61] "metrics-server-746fcd58dc-hskrc" [40d8858a-a2a6-4ecb-a444-fc51fc311b46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:15:39.767602  340598 system_pods.go:61] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:15:39.767609  340598 system_pods.go:74] duration metric: took 9.499039ms to wait for pod list to return data ...
	I0919 23:15:39.767619  340598 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:15:39.773053  340598 default_sa.go:45] found service account: "default"
	I0919 23:15:39.773192  340598 default_sa.go:55] duration metric: took 5.56416ms for default service account to be created ...
	I0919 23:15:39.773230  340598 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:15:39.777639  340598 system_pods.go:86] 9 kube-system pods found
	I0919 23:15:39.777729  340598 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:39.777742  340598 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:39.777751  340598 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 23:15:39.777761  340598 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:39.777769  340598 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:15:39.777778  340598 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:39.777786  340598 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:39.777793  340598 system_pods.go:89] "metrics-server-746fcd58dc-hskrc" [40d8858a-a2a6-4ecb-a444-fc51fc311b46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:15:39.777800  340598 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:15:39.777809  340598 system_pods.go:126] duration metric: took 4.571567ms to wait for k8s-apps to be running ...
	I0919 23:15:39.777819  340598 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:15:39.777867  340598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:15:39.800350  340598 system_svc.go:56] duration metric: took 22.521873ms WaitForService to wait for kubelet
	I0919 23:15:39.800395  340598 kubeadm.go:578] duration metric: took 6.349514332s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:15:39.800416  340598 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:15:39.804858  340598 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 23:15:39.804890  340598 node_conditions.go:123] node cpu capacity is 8
	I0919 23:15:39.804909  340598 node_conditions.go:105] duration metric: took 4.487437ms to run NodePressure ...
	I0919 23:15:39.804923  340598 start.go:241] waiting for startup goroutines ...
	I0919 23:15:39.804931  340598 start.go:246] waiting for cluster config update ...
	I0919 23:15:39.804946  340598 start.go:255] writing updated cluster config ...
	I0919 23:15:39.805410  340598 ssh_runner.go:195] Run: rm -f paused
	I0919 23:15:39.811083  340598 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:15:39.819548  340598 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qj565" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:15:38.942431  344703 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-896447:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (5.010480573s)
	I0919 23:15:38.942491  344703 kic.go:203] duration metric: took 5.01066921s to extract preloaded images to volume ...
	W0919 23:15:38.942626  344703 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:15:38.942665  344703 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:15:38.942818  344703 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:15:39.062635  344703 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-896447 --name enable-default-cni-896447 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-896447 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-896447 --network enable-default-cni-896447 --ip 192.168.76.2 --volume enable-default-cni-896447:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:15:39.596204  344703 cli_runner.go:164] Run: docker container inspect enable-default-cni-896447 --format={{.State.Running}}
	I0919 23:15:39.624427  344703 cli_runner.go:164] Run: docker container inspect enable-default-cni-896447 --format={{.State.Status}}
	I0919 23:15:39.651352  344703 cli_runner.go:164] Run: docker exec enable-default-cni-896447 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:15:39.725132  344703 oci.go:144] the created container "enable-default-cni-896447" has a running status.
	I0919 23:15:39.725203  344703 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/enable-default-cni-896447/id_rsa...
	I0919 23:15:40.104143  344703 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/enable-default-cni-896447/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:15:40.147711  344703 cli_runner.go:164] Run: docker container inspect enable-default-cni-896447 --format={{.State.Status}}
	I0919 23:15:40.184441  344703 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:15:40.184472  344703 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-896447 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:15:40.258705  344703 cli_runner.go:164] Run: docker container inspect enable-default-cni-896447 --format={{.State.Status}}
	I0919 23:15:40.289562  344703 machine.go:93] provisionDockerMachine start ...
	I0919 23:15:40.289769  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:40.323022  344703 main.go:141] libmachine: Using SSH client type: native
	I0919 23:15:40.323586  344703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0919 23:15:40.323603  344703 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:15:40.495578  344703 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-896447
	
	I0919 23:15:40.495610  344703 ubuntu.go:182] provisioning hostname "enable-default-cni-896447"
	I0919 23:15:40.495703  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:40.521149  344703 main.go:141] libmachine: Using SSH client type: native
	I0919 23:15:40.521505  344703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0919 23:15:40.521526  344703 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-896447 && echo "enable-default-cni-896447" | sudo tee /etc/hostname
	I0919 23:15:40.687047  344703 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-896447
	
	I0919 23:15:40.687219  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:40.712076  344703 main.go:141] libmachine: Using SSH client type: native
	I0919 23:15:40.712364  344703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0919 23:15:40.712392  344703 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-896447' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-896447/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-896447' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:15:40.864360  344703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:15:40.864399  344703 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 23:15:40.864443  344703 ubuntu.go:190] setting up certificates
	I0919 23:15:40.864456  344703 provision.go:84] configureAuth start
	I0919 23:15:40.864515  344703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-896447
	I0919 23:15:40.886435  344703 provision.go:143] copyHostCerts
	I0919 23:15:40.886496  344703 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 23:15:40.886504  344703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 23:15:40.886569  344703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 23:15:40.886684  344703 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 23:15:40.886697  344703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 23:15:40.886731  344703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 23:15:40.886829  344703 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 23:15:40.886837  344703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 23:15:40.886872  344703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 23:15:40.886965  344703 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-896447 san=[127.0.0.1 192.168.76.2 enable-default-cni-896447 localhost minikube]
	I0919 23:15:41.136936  344703 provision.go:177] copyRemoteCerts
	I0919 23:15:41.137035  344703 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:15:41.137087  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:41.157567  344703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/enable-default-cni-896447/id_rsa Username:docker}
	I0919 23:15:41.265236  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 23:15:41.303097  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0919 23:15:41.334135  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:15:41.369576  344703 provision.go:87] duration metric: took 505.106ms to configureAuth
	I0919 23:15:41.369608  344703 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:15:41.369841  344703 config.go:182] Loaded profile config "enable-default-cni-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:15:41.369851  344703 machine.go:96] duration metric: took 1.080192174s to provisionDockerMachine
	I0919 23:15:41.369859  344703 client.go:171] duration metric: took 8.2332502s to LocalClient.Create
	I0919 23:15:41.369882  344703 start.go:167] duration metric: took 8.233310858s to libmachine.API.Create "enable-default-cni-896447"
	I0919 23:15:41.369890  344703 start.go:293] postStartSetup for "enable-default-cni-896447" (driver="docker")
	I0919 23:15:41.369904  344703 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:15:41.369967  344703 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:15:41.370011  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:41.399243  344703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/enable-default-cni-896447/id_rsa Username:docker}
	I0919 23:15:41.511990  344703 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:15:41.518841  344703 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:15:41.518868  344703 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:15:41.518881  344703 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:15:41.518887  344703 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:15:41.518898  344703 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 23:15:41.518945  344703 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 23:15:41.519016  344703 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 23:15:41.519103  344703 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:15:41.539376  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:15:41.590663  344703 start.go:296] duration metric: took 220.753633ms for postStartSetup
	I0919 23:15:41.591115  344703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-896447
	I0919 23:15:41.612462  344703 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/config.json ...
	I0919 23:15:41.612743  344703 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:15:41.612787  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:41.634405  344703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/enable-default-cni-896447/id_rsa Username:docker}
	I0919 23:15:41.732247  344703 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:15:41.739671  344703 start.go:128] duration metric: took 8.605598386s to createHost
	I0919 23:15:41.739701  344703 start.go:83] releasing machines lock for "enable-default-cni-896447", held for 8.605762338s
	I0919 23:15:41.739807  344703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-896447
	I0919 23:15:41.768142  344703 ssh_runner.go:195] Run: cat /version.json
	I0919 23:15:41.768227  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:41.768250  344703 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:15:41.768312  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:41.795698  344703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/enable-default-cni-896447/id_rsa Username:docker}
	I0919 23:15:41.795853  344703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/enable-default-cni-896447/id_rsa Username:docker}
	I0919 23:15:42.005044  344703 ssh_runner.go:195] Run: systemctl --version
	I0919 23:15:42.010329  344703 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:15:42.015411  344703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:15:42.050101  344703 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:15:42.050203  344703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:15:42.085998  344703 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 23:15:42.086026  344703 start.go:495] detecting cgroup driver to use...
	I0919 23:15:42.086064  344703 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:15:42.086118  344703 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 23:15:42.101407  344703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:15:42.117471  344703 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:15:42.117534  344703 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:15:42.132604  344703 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:15:42.148842  344703 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:15:42.251216  344703 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:15:42.333460  344703 docker.go:234] disabling docker service ...
	I0919 23:15:42.333589  344703 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:15:42.355374  344703 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:15:42.372177  344703 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:15:42.451512  344703 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:15:42.533323  344703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:15:42.551510  344703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:15:42.578571  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:15:42.595946  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:15:42.615207  344703 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:15:42.615419  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:15:42.633805  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:15:42.648976  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:15:42.667228  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:15:42.682617  344703 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:15:42.696113  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:15:42.709911  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:15:42.724511  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:15:42.739136  344703 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:15:42.752291  344703 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:15:42.764737  344703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:15:42.856585  344703 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 23:15:43.013637  344703 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 23:15:43.013734  344703 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 23:15:43.019593  344703 start.go:563] Will wait 60s for crictl version
	I0919 23:15:43.019664  344703 ssh_runner.go:195] Run: which crictl
	I0919 23:15:43.025226  344703 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:15:43.082693  344703 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 23:15:43.082822  344703 ssh_runner.go:195] Run: containerd --version
	I0919 23:15:43.122943  344703 ssh_runner.go:195] Run: containerd --version
	I0919 23:15:43.160753  344703 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 23:15:45.444732  337922 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:15:45.444807  337922 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:15:45.444930  337922 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:15:45.445030  337922 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:15:45.445115  337922 kubeadm.go:310] OS: Linux
	I0919 23:15:45.445229  337922 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:15:45.445299  337922 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:15:45.445359  337922 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:15:45.445468  337922 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:15:45.445547  337922 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:15:45.445632  337922 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:15:45.445715  337922 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:15:45.445805  337922 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:15:45.445916  337922 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:15:45.446121  337922 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:15:45.446274  337922 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:15:45.446373  337922 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:15:45.450361  337922 out.go:252]   - Generating certificates and keys ...
	I0919 23:15:45.450567  337922 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:15:45.450684  337922 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:15:45.450781  337922 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:15:45.450855  337922 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:15:45.450932  337922 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:15:45.451004  337922 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:15:45.451067  337922 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:15:45.451364  337922 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-896447 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0919 23:15:45.451462  337922 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:15:45.451660  337922 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-896447 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0919 23:15:45.451755  337922 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:15:45.451843  337922 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:15:45.451905  337922 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:15:45.451989  337922 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:15:45.452057  337922 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:15:45.452136  337922 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:15:45.452258  337922 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:15:45.452362  337922 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:15:45.452481  337922 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:15:45.452599  337922 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:15:45.452710  337922 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:15:45.454716  337922 out.go:252]   - Booting up control plane ...
	I0919 23:15:45.454814  337922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:15:45.454910  337922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:15:45.455028  337922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:15:45.455208  337922 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:15:45.455333  337922 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:15:45.455492  337922 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:15:45.455622  337922 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:15:45.455680  337922 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:15:45.455872  337922 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:15:45.456030  337922 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:15:45.456117  337922 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501123072s
	I0919 23:15:45.456251  337922 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:15:45.456321  337922 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0919 23:15:45.456482  337922 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:15:45.456610  337922 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:15:45.456748  337922 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.504380857s
	I0919 23:15:45.456870  337922 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.069098705s
	I0919 23:15:45.456967  337922 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.502775889s
	I0919 23:15:45.457113  337922 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:15:45.457303  337922 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:15:45.457393  337922 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:15:45.457756  337922 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-896447 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:15:45.457869  337922 kubeadm.go:310] [bootstrap-token] Using token: ldywn1.hhm1ey7n54hgdxgs
	I0919 23:15:45.460976  337922 out.go:252]   - Configuring RBAC rules ...
	I0919 23:15:45.461111  337922 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:15:45.461270  337922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:15:45.461689  337922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:15:45.461873  337922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:15:45.462030  337922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:15:45.462302  337922 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:15:45.462519  337922 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:15:45.462596  337922 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:15:45.462668  337922 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:15:45.462681  337922 kubeadm.go:310] 
	I0919 23:15:45.462782  337922 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:15:45.462793  337922 kubeadm.go:310] 
	I0919 23:15:45.462910  337922 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:15:45.462922  337922 kubeadm.go:310] 
	I0919 23:15:45.462959  337922 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:15:45.463052  337922 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:15:45.463193  337922 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:15:45.463222  337922 kubeadm.go:310] 
	I0919 23:15:45.463324  337922 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:15:45.463348  337922 kubeadm.go:310] 
	I0919 23:15:45.463422  337922 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:15:45.463433  337922 kubeadm.go:310] 
	I0919 23:15:45.463500  337922 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:15:45.463616  337922 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:15:45.463713  337922 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:15:45.463746  337922 kubeadm.go:310] 
	I0919 23:15:45.463858  337922 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:15:45.463973  337922 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:15:45.463978  337922 kubeadm.go:310] 
	I0919 23:15:45.464088  337922 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ldywn1.hhm1ey7n54hgdxgs \
	I0919 23:15:45.464244  337922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 \
	I0919 23:15:45.464283  337922 kubeadm.go:310] 	--control-plane 
	I0919 23:15:45.464308  337922 kubeadm.go:310] 
	I0919 23:15:45.464454  337922 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:15:45.464478  337922 kubeadm.go:310] 
	I0919 23:15:45.464599  337922 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ldywn1.hhm1ey7n54hgdxgs \
	I0919 23:15:45.464766  337922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 
	I0919 23:15:45.464797  337922 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0919 23:15:45.470347  337922 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	W0919 23:15:41.828842  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:15:44.326627  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	I0919 23:15:42.822304  326932 system_pods.go:86] 9 kube-system pods found
	I0919 23:15:42.822344  326932 system_pods.go:89] "calico-kube-controllers-59556d9b4c-2fjg9" [8dc0b3b6-f557-4ad1-85ae-60ddc48950a7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0919 23:15:42.822357  326932 system_pods.go:89] "calico-node-r2l7z" [ebd611c4-409f-4cc4-8508-4c5892b758d8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0919 23:15:42.822366  326932 system_pods.go:89] "coredns-66bc5c9577-dldb4" [a59ee5bf-734d-4164-896a-639bb683ff7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:42.822372  326932 system_pods.go:89] "etcd-calico-896447" [41123509-0390-4020-808b-bce50ffb6c4f] Running
	I0919 23:15:42.822381  326932 system_pods.go:89] "kube-apiserver-calico-896447" [25b92dfb-471c-422d-998e-cde4285e1283] Running
	I0919 23:15:42.822387  326932 system_pods.go:89] "kube-controller-manager-calico-896447" [42b190c3-2f24-4dd1-8b90-58c440546dc7] Running
	I0919 23:15:42.822394  326932 system_pods.go:89] "kube-proxy-fwxkb" [634c6d7b-ea71-4998-b7ac-c792099dadd4] Running
	I0919 23:15:42.822399  326932 system_pods.go:89] "kube-scheduler-calico-896447" [e005a269-f2a5-41f2-b518-57da14a4830f] Running
	I0919 23:15:42.822404  326932 system_pods.go:89] "storage-provisioner" [0a6daeb2-d50a-4464-86cc-1f501ec95e29] Running
	I0919 23:15:42.822427  326932 retry.go:31] will retry after 11.166139539s: missing components: kube-dns
	I0919 23:15:43.162717  344703 cli_runner.go:164] Run: docker network inspect enable-default-cni-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:15:43.189336  344703 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0919 23:15:43.195787  344703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:15:43.215896  344703 kubeadm.go:875] updating cluster {Name:enable-default-cni-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:enable-default-cni-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:15:43.216029  344703 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:15:43.216092  344703 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:15:43.274643  344703 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:15:43.274669  344703 containerd.go:534] Images already preloaded, skipping extraction
	I0919 23:15:43.274857  344703 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:15:43.329488  344703 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:15:43.329511  344703 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:15:43.329522  344703 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 containerd true true} ...
	I0919 23:15:43.329701  344703 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-896447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:enable-default-cni-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0919 23:15:43.329783  344703 ssh_runner.go:195] Run: sudo crictl info
	I0919 23:15:43.388791  344703 cni.go:84] Creating CNI manager for "bridge"
	I0919 23:15:43.388827  344703 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:15:43.388861  344703 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-896447 NodeName:enable-default-cni-896447 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:15:43.389031  344703 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "enable-default-cni-896447"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:15:43.389120  344703 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:15:43.407392  344703 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:15:43.407477  344703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:15:43.422344  344703 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I0919 23:15:43.454990  344703 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:15:43.490771  344703 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I0919 23:15:43.519250  344703 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:15:43.525355  344703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:15:43.543972  344703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:15:43.643376  344703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:15:43.668514  344703 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447 for IP: 192.168.76.2
	I0919 23:15:43.668538  344703 certs.go:194] generating shared ca certs ...
	I0919 23:15:43.668557  344703 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:43.668717  344703 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 23:15:43.668774  344703 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 23:15:43.668789  344703 certs.go:256] generating profile certs ...
	I0919 23:15:43.668860  344703 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/client.key
	I0919 23:15:43.668875  344703 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/client.crt with IP's: []
	I0919 23:15:43.805813  344703 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/client.crt ...
	I0919 23:15:43.805853  344703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/client.crt: {Name:mk0464640540612b6e74686b161438202613fde1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:43.806051  344703 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/client.key ...
	I0919 23:15:43.806068  344703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/client.key: {Name:mk367a6ea97357b56137acda36c4237b57e3c702 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:43.806203  344703 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.key.6c00c855
	I0919 23:15:43.806238  344703 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.crt.6c00c855 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0919 23:15:44.175596  344703 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.crt.6c00c855 ...
	I0919 23:15:44.175631  344703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.crt.6c00c855: {Name:mk4630c6f3a2421136b44dcafd50c85aef43ff7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:44.175790  344703 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.key.6c00c855 ...
	I0919 23:15:44.175807  344703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.key.6c00c855: {Name:mkb0e04733d4a30de7229c750f6ce228e8f90973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:44.175905  344703 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.crt.6c00c855 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.crt
	I0919 23:15:44.175984  344703 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.key.6c00c855 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.key
	I0919 23:15:44.176039  344703 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.key
	I0919 23:15:44.176054  344703 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.crt with IP's: []
	I0919 23:15:44.350228  344703 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.crt ...
	I0919 23:15:44.350258  344703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.crt: {Name:mk72ed81b89d754d3b39a97ac213a8202ef5300b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:44.350416  344703 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.key ...
	I0919 23:15:44.350430  344703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.key: {Name:mk6e74cf487510ceb651f1076f9f57fc7e73562b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:44.350629  344703 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:15:44.350679  344703 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:15:44.350696  344703 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:15:44.350728  344703 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:15:44.350763  344703 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:15:44.350810  344703 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:15:44.350869  344703 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:15:44.351521  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:15:44.383087  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:15:44.416676  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:15:44.449964  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:15:44.488929  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 23:15:44.521668  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 23:15:44.553975  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:15:44.593411  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 23:15:44.636337  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:15:44.679488  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:15:44.722503  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:15:44.763203  344703 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:15:44.791868  344703 ssh_runner.go:195] Run: openssl version
	I0919 23:15:44.799596  344703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:15:44.813637  344703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:15:44.820636  344703 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:15:44.820916  344703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:15:44.832739  344703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 23:15:44.848935  344703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:15:44.862800  344703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:15:44.868564  344703 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:15:44.868630  344703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:15:44.880061  344703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:15:44.896920  344703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:15:44.911404  344703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:44.916561  344703 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:44.916629  344703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:44.926128  344703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:15:44.943980  344703 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:15:44.948868  344703 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:15:44.948937  344703 kubeadm.go:392] StartCluster: {Name:enable-default-cni-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:enable-default-cni-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:15:44.949042  344703 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:15:44.949104  344703 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:15:45.001452  344703 cri.go:89] found id: ""
	I0919 23:15:45.001528  344703 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:15:45.015533  344703 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:15:45.027589  344703 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:15:45.027666  344703 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:15:45.039762  344703 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:15:45.039785  344703 kubeadm.go:157] found existing configuration files:
	
	I0919 23:15:45.039849  344703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:15:45.052324  344703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:15:45.052394  344703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:15:45.065237  344703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:15:45.077902  344703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:15:45.077964  344703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:15:45.090511  344703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:15:45.102541  344703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:15:45.102609  344703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:15:45.115026  344703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:15:45.127298  344703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:15:45.127351  344703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
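
Each grep/rm pair above implements the same stale-config check: a kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed so that kubeadm init can regenerate it. A compact Go sketch of that loop, with the file list and endpoint taken from the log and everything else illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

const controlPlaneURL = "https://control-plane.minikube.internal:8443"

// cleanStaleConfigs removes any kubeconfig that is missing or that does not
// reference the expected control-plane endpoint.
func cleanStaleConfigs() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), controlPlaneURL) {
			os.Remove(f) // missing or stale: let kubeadm rewrite it
			fmt.Printf("removed stale config %s\n", f)
		}
	}
}

func main() { cleanStaleConfigs() }

In the run above every grep exited with status 2 because none of the files existed yet, so each rm was effectively a no-op before kubeadm init ran.
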
	I0919 23:15:45.138670  344703 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:15:45.208420  344703 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:15:45.286259  344703 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:15:45.472227  337922 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 23:15:45.472300  337922 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0919 23:15:45.476858  337922 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0919 23:15:45.476893  337922 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0919 23:15:45.506768  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 23:15:46.125044  337922 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:15:46.125137  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-896447 minikube.k8s.io/updated_at=2025_09_19T23_15_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=custom-flannel-896447 minikube.k8s.io/primary=true
	I0919 23:15:46.125211  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:46.137355  337922 ops.go:34] apiserver oom_adj: -16
	I0919 23:15:46.233475  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:46.734506  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:47.234420  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:47.734419  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:48.234401  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:48.734132  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:49.233737  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:49.734208  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:50.234052  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:50.311260  337922 kubeadm.go:1105] duration metric: took 4.186159034s to wait for elevateKubeSystemPrivileges
	I0919 23:15:50.311296  337922 kubeadm.go:394] duration metric: took 19.473567437s to StartCluster
	I0919 23:15:50.311318  337922 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:50.311422  337922 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:15:50.312701  337922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:50.312969  337922 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:15:50.312988  337922 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:15:50.313044  337922 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:15:50.313169  337922 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-896447"
	I0919 23:15:50.313175  337922 config.go:182] Loaded profile config "custom-flannel-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:15:50.313190  337922 addons.go:238] Setting addon storage-provisioner=true in "custom-flannel-896447"
	I0919 23:15:50.313223  337922 host.go:66] Checking if "custom-flannel-896447" exists ...
	I0919 23:15:50.313214  337922 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-896447"
	I0919 23:15:50.313249  337922 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-896447"
	I0919 23:15:50.313653  337922 cli_runner.go:164] Run: docker container inspect custom-flannel-896447 --format={{.State.Status}}
	I0919 23:15:50.313830  337922 cli_runner.go:164] Run: docker container inspect custom-flannel-896447 --format={{.State.Status}}
	I0919 23:15:50.315818  337922 out.go:179] * Verifying Kubernetes components...
	I0919 23:15:50.318504  337922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:15:50.344092  337922 addons.go:238] Setting addon default-storageclass=true in "custom-flannel-896447"
	I0919 23:15:50.344212  337922 host.go:66] Checking if "custom-flannel-896447" exists ...
	I0919 23:15:50.344800  337922 cli_runner.go:164] Run: docker container inspect custom-flannel-896447 --format={{.State.Status}}
	I0919 23:15:50.346489  337922 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0919 23:15:46.825866  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:15:48.826707  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:15:50.828329  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	I0919 23:15:50.348318  337922 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:15:50.348346  337922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:15:50.348413  337922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-896447
	I0919 23:15:50.374731  337922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/custom-flannel-896447/id_rsa Username:docker}
	I0919 23:15:50.382128  337922 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:15:50.382393  337922 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:15:50.382801  337922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-896447
	I0919 23:15:50.411174  337922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/custom-flannel-896447/id_rsa Username:docker}
	I0919 23:15:50.432116  337922 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:15:50.496285  337922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:15:50.568253  337922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:15:50.571983  337922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:15:50.716245  337922 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0919 23:15:50.717725  337922 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-896447" to be "Ready" ...
	I0919 23:15:50.727204  337922 node_ready.go:49] node "custom-flannel-896447" is "Ready"
	I0919 23:15:50.727250  337922 node_ready.go:38] duration metric: took 9.484224ms for node "custom-flannel-896447" to be "Ready" ...
	I0919 23:15:50.727268  337922 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:15:50.727428  337922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:15:51.012800  337922 api_server.go:72] duration metric: took 699.789515ms to wait for apiserver process to appear ...
	I0919 23:15:51.012836  337922 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:15:51.012864  337922 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0919 23:15:51.016264  337922 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0919 23:15:51.017923  337922 addons.go:514] duration metric: took 704.871462ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0919 23:15:51.021096  337922 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0919 23:15:51.022299  337922 api_server.go:141] control plane version: v1.34.0
	I0919 23:15:51.022326  337922 api_server.go:131] duration metric: took 9.483448ms to wait for apiserver health ...
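
The healthz wait logged above simply polls https://192.168.85.2:8443/healthz until it answers 200 ok. A self-contained Go sketch of such a poll; the skipped TLS verification and the timeout value are assumptions made to keep the example short, not minikube's behavior:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given URL until it returns HTTP 200 or the
// overall timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
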
	I0919 23:15:51.022335  337922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:15:51.026144  337922 system_pods.go:59] 8 kube-system pods found
	I0919 23:15:51.026199  337922 system_pods.go:61] "coredns-66bc5c9577-l9sjz" [78579dbb-0cd3-4c5f-98d0-32ece641c810] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.026211  337922 system_pods.go:61] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.026224  337922 system_pods.go:61] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:51.026237  337922 system_pods.go:61] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:51.026248  337922 system_pods.go:61] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:15:51.026254  337922 system_pods.go:61] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:51.026260  337922 system_pods.go:61] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:51.026264  337922 system_pods.go:61] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Pending
	I0919 23:15:51.026308  337922 system_pods.go:74] duration metric: took 3.966887ms to wait for pod list to return data ...
	I0919 23:15:51.026323  337922 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:15:51.029356  337922 default_sa.go:45] found service account: "default"
	I0919 23:15:51.029382  337922 default_sa.go:55] duration metric: took 3.05277ms for default service account to be created ...
	I0919 23:15:51.029392  337922 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:15:51.032534  337922 system_pods.go:86] 8 kube-system pods found
	I0919 23:15:51.032570  337922 system_pods.go:89] "coredns-66bc5c9577-l9sjz" [78579dbb-0cd3-4c5f-98d0-32ece641c810] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.032581  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.032590  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:51.032601  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:51.032630  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:15:51.032646  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:51.032656  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:51.032667  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:15:51.032694  337922 retry.go:31] will retry after 235.77365ms: missing components: kube-dns, kube-proxy
	I0919 23:15:51.236585  337922 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-896447" context rescaled to 1 replicas
	I0919 23:15:51.273575  337922 system_pods.go:86] 8 kube-system pods found
	I0919 23:15:51.273617  337922 system_pods.go:89] "coredns-66bc5c9577-l9sjz" [78579dbb-0cd3-4c5f-98d0-32ece641c810] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.273626  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.273636  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:51.273647  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:51.273656  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:15:51.273688  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:51.273700  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:51.273716  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:15:51.273745  337922 retry.go:31] will retry after 343.041377ms: missing components: kube-dns, kube-proxy
	I0919 23:15:51.621514  337922 system_pods.go:86] 8 kube-system pods found
	I0919 23:15:51.621560  337922 system_pods.go:89] "coredns-66bc5c9577-l9sjz" [78579dbb-0cd3-4c5f-98d0-32ece641c810] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.621573  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.621582  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:51.621601  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:51.621613  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:15:51.621627  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:51.621635  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:51.621647  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:15:51.621666  337922 retry.go:31] will retry after 330.136086ms: missing components: kube-dns, kube-proxy
	I0919 23:15:51.956404  337922 system_pods.go:86] 8 kube-system pods found
	I0919 23:15:51.956464  337922 system_pods.go:89] "coredns-66bc5c9577-l9sjz" [78579dbb-0cd3-4c5f-98d0-32ece641c810] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.956472  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.956477  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:51.956486  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:51.956491  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:51.956496  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running
	I0919 23:15:51.956501  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:51.956506  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:15:51.956520  337922 retry.go:31] will retry after 392.437325ms: missing components: kube-dns
	I0919 23:15:52.354060  337922 system_pods.go:86] 8 kube-system pods found
	I0919 23:15:52.354093  337922 system_pods.go:89] "coredns-66bc5c9577-l9sjz" [78579dbb-0cd3-4c5f-98d0-32ece641c810] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:52.354101  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:52.354113  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:52.354121  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:52.354124  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:52.354129  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running
	I0919 23:15:52.354137  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:52.354142  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:15:52.354199  337922 retry.go:31] will retry after 536.53104ms: missing components: kube-dns
	I0919 23:15:52.895553  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:15:52.895582  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:52.895589  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:52.895597  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:52.895601  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:52.895607  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:52.895612  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:52.895616  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:15:52.895629  337922 retry.go:31] will retry after 923.672765ms: missing components: kube-dns
	I0919 23:15:53.823341  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:15:53.823382  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:53.823402  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:53.823415  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:53.823423  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:53.823435  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:53.823447  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:53.823455  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:15:53.823477  337922 retry.go:31] will retry after 1.077598414s: missing components: kube-dns
	W0919 23:15:53.326085  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:15:55.326690  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	I0919 23:15:53.993415  326932 system_pods.go:86] 9 kube-system pods found
	I0919 23:15:53.993450  326932 system_pods.go:89] "calico-kube-controllers-59556d9b4c-2fjg9" [8dc0b3b6-f557-4ad1-85ae-60ddc48950a7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0919 23:15:53.993462  326932 system_pods.go:89] "calico-node-r2l7z" [ebd611c4-409f-4cc4-8508-4c5892b758d8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0919 23:15:53.993472  326932 system_pods.go:89] "coredns-66bc5c9577-dldb4" [a59ee5bf-734d-4164-896a-639bb683ff7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:53.993477  326932 system_pods.go:89] "etcd-calico-896447" [41123509-0390-4020-808b-bce50ffb6c4f] Running
	I0919 23:15:53.993485  326932 system_pods.go:89] "kube-apiserver-calico-896447" [25b92dfb-471c-422d-998e-cde4285e1283] Running
	I0919 23:15:53.993491  326932 system_pods.go:89] "kube-controller-manager-calico-896447" [42b190c3-2f24-4dd1-8b90-58c440546dc7] Running
	I0919 23:15:53.993497  326932 system_pods.go:89] "kube-proxy-fwxkb" [634c6d7b-ea71-4998-b7ac-c792099dadd4] Running
	I0919 23:15:53.993503  326932 system_pods.go:89] "kube-scheduler-calico-896447" [e005a269-f2a5-41f2-b518-57da14a4830f] Running
	I0919 23:15:53.993508  326932 system_pods.go:89] "storage-provisioner" [0a6daeb2-d50a-4464-86cc-1f501ec95e29] Running
	I0919 23:15:53.993528  326932 retry.go:31] will retry after 11.0735947s: missing components: kube-dns
	I0919 23:15:54.906060  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:15:54.906098  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:54.906109  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:54.906117  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:15:54.906125  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:54.906132  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:54.906138  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:15:54.906144  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:15:54.906178  337922 retry.go:31] will retry after 1.283889614s: missing components: kube-dns
	I0919 23:15:56.194036  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:15:56.194067  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:56.194073  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:15:56.194080  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:15:56.194085  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:56.194090  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:56.194093  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:15:56.194098  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:15:56.194113  337922 retry.go:31] will retry after 1.121069777s: missing components: kube-dns
	I0919 23:15:57.319937  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:15:57.319972  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:57.319980  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:15:57.319988  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:15:57.319995  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:57.320002  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:57.320007  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:15:57.320013  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:15:57.320033  337922 retry.go:31] will retry after 1.960539688s: missing components: kube-dns
	I0919 23:15:59.285894  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:15:59.285929  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:59.285935  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:15:59.285942  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:15:59.285946  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:59.285951  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:59.285955  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:15:59.285959  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:15:59.285973  337922 retry.go:31] will retry after 2.809840366s: missing components: kube-dns
	W0919 23:15:57.327323  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:15:59.825005  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	I0919 23:16:02.100695  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:16:02.100735  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:02.100746  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:16:02.100753  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:16:02.100757  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:16:02.100762  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:16:02.100766  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:16:02.100770  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:16:02.100784  337922 retry.go:31] will retry after 3.200482563s: missing components: kube-dns
	W0919 23:16:01.825989  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:16:03.826331  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:16:05.826869  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	I0919 23:16:05.072876  326932 system_pods.go:86] 9 kube-system pods found
	I0919 23:16:05.072911  326932 system_pods.go:89] "calico-kube-controllers-59556d9b4c-2fjg9" [8dc0b3b6-f557-4ad1-85ae-60ddc48950a7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0919 23:16:05.072919  326932 system_pods.go:89] "calico-node-r2l7z" [ebd611c4-409f-4cc4-8508-4c5892b758d8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0919 23:16:05.072934  326932 system_pods.go:89] "coredns-66bc5c9577-dldb4" [a59ee5bf-734d-4164-896a-639bb683ff7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:05.072944  326932 system_pods.go:89] "etcd-calico-896447" [41123509-0390-4020-808b-bce50ffb6c4f] Running
	I0919 23:16:05.072951  326932 system_pods.go:89] "kube-apiserver-calico-896447" [25b92dfb-471c-422d-998e-cde4285e1283] Running
	I0919 23:16:05.072955  326932 system_pods.go:89] "kube-controller-manager-calico-896447" [42b190c3-2f24-4dd1-8b90-58c440546dc7] Running
	I0919 23:16:05.072961  326932 system_pods.go:89] "kube-proxy-fwxkb" [634c6d7b-ea71-4998-b7ac-c792099dadd4] Running
	I0919 23:16:05.072964  326932 system_pods.go:89] "kube-scheduler-calico-896447" [e005a269-f2a5-41f2-b518-57da14a4830f] Running
	I0919 23:16:05.072969  326932 system_pods.go:89] "storage-provisioner" [0a6daeb2-d50a-4464-86cc-1f501ec95e29] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:16:05.072988  326932 retry.go:31] will retry after 15.661468577s: missing components: kube-dns
	I0919 23:16:05.306364  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:16:05.306435  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:05.306445  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:16:05.306454  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:16:05.306458  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:16:05.306463  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:16:05.306468  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:16:05.306472  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:16:05.306486  337922 retry.go:31] will retry after 3.811447815s: missing components: kube-dns
	I0919 23:16:09.125696  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:16:09.125737  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:09.125747  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:16:09.125757  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:16:09.125763  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:16:09.125771  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:16:09.125777  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:16:09.125785  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:16:09.125806  337922 retry.go:31] will retry after 4.399926051s: missing components: kube-dns
	W0919 23:16:08.326079  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:16:10.826454  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	I0919 23:16:13.532009  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:16:13.532041  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:13.532047  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:16:13.532054  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:16:13.532059  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:16:13.532063  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:16:13.532068  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:16:13.532071  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:16:13.532085  337922 retry.go:31] will retry after 5.921906271s: missing components: kube-dns
	W0919 23:16:12.826970  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:16:15.325494  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	I0919 23:16:16.325747  340598 pod_ready.go:94] pod "coredns-66bc5c9577-qj565" is "Ready"
	I0919 23:16:16.325773  340598 pod_ready.go:86] duration metric: took 36.506135595s for pod "coredns-66bc5c9577-qj565" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:16.328735  340598 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:16.333309  340598 pod_ready.go:94] pod "etcd-default-k8s-diff-port-149888" is "Ready"
	I0919 23:16:16.333333  340598 pod_ready.go:86] duration metric: took 4.572083ms for pod "etcd-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:16.336000  340598 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:16.340535  340598 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-149888" is "Ready"
	I0919 23:16:16.340558  340598 pod_ready.go:86] duration metric: took 4.532781ms for pod "kube-apiserver-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:16.342854  340598 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:16.523474  340598 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-149888" is "Ready"
	I0919 23:16:16.523504  340598 pod_ready.go:86] duration metric: took 180.619849ms for pod "kube-controller-manager-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:16.723944  340598 pod_ready.go:83] waiting for pod "kube-proxy-txcms" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:17.123269  340598 pod_ready.go:94] pod "kube-proxy-txcms" is "Ready"
	I0919 23:16:17.123300  340598 pod_ready.go:86] duration metric: took 399.331369ms for pod "kube-proxy-txcms" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:17.324363  340598 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:17.724691  340598 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-149888" is "Ready"
	I0919 23:16:17.724717  340598 pod_ready.go:86] duration metric: took 400.321939ms for pod "kube-scheduler-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:17.724728  340598 pod_ready.go:40] duration metric: took 37.913532643s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:16:17.774479  340598 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:16:17.777134  340598 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-149888" cluster and "default" namespace by default
	I0919 23:16:19.460410  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:16:19.460441  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:19.460447  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:16:19.460454  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:16:19.460459  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:16:19.460464  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:16:19.460468  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:16:19.460473  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:16:19.460487  337922 retry.go:31] will retry after 7.530517256s: missing components: kube-dns
	I0919 23:16:20.744277  326932 system_pods.go:86] 9 kube-system pods found
	I0919 23:16:20.744319  326932 system_pods.go:89] "calico-kube-controllers-59556d9b4c-2fjg9" [8dc0b3b6-f557-4ad1-85ae-60ddc48950a7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0919 23:16:20.744335  326932 system_pods.go:89] "calico-node-r2l7z" [ebd611c4-409f-4cc4-8508-4c5892b758d8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0919 23:16:20.744345  326932 system_pods.go:89] "coredns-66bc5c9577-dldb4" [a59ee5bf-734d-4164-896a-639bb683ff7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:20.744352  326932 system_pods.go:89] "etcd-calico-896447" [41123509-0390-4020-808b-bce50ffb6c4f] Running
	I0919 23:16:20.744360  326932 system_pods.go:89] "kube-apiserver-calico-896447" [25b92dfb-471c-422d-998e-cde4285e1283] Running
	I0919 23:16:20.744366  326932 system_pods.go:89] "kube-controller-manager-calico-896447" [42b190c3-2f24-4dd1-8b90-58c440546dc7] Running
	I0919 23:16:20.744374  326932 system_pods.go:89] "kube-proxy-fwxkb" [634c6d7b-ea71-4998-b7ac-c792099dadd4] Running
	I0919 23:16:20.744389  326932 system_pods.go:89] "kube-scheduler-calico-896447" [e005a269-f2a5-41f2-b518-57da14a4830f] Running
	I0919 23:16:20.744395  326932 system_pods.go:89] "storage-provisioner" [0a6daeb2-d50a-4464-86cc-1f501ec95e29] Running
	I0919 23:16:20.744421  326932 retry.go:31] will retry after 24.317497144s: missing components: kube-dns
	I0919 23:16:26.998293  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:16:26.998331  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:26.998339  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:16:26.998348  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:16:26.998354  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:16:26.998363  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:16:26.998368  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:16:26.998376  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:16:26.998395  337922 retry.go:31] will retry after 7.159590412s: missing components: kube-dns
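
The repeated retry.go lines above all follow one pattern: list the kube-system pods, check whether the required components (here kube-dns) are running, and sleep for a growing delay before trying again. A rough Go sketch of that pattern using kubectl; the backoff factor, the attempt limit, and the use of kubectl instead of a direct API client are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// kubeDNSRunning reports whether every pod labeled k8s-app=kube-dns in
// kube-system is in the Running phase.
func kubeDNSRunning() bool {
	out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pods",
		"-l", "k8s-app=kube-dns",
		"-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false
	}
	for _, p := range phases {
		if p != "Running" {
			return false
		}
	}
	return true
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= 20; attempt++ {
		if kubeDNSRunning() {
			fmt.Println("all required components are running")
			return
		}
		fmt.Printf("will retry after %s: missing components: kube-dns\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // lengthen the wait between attempts
	}
	fmt.Println("gave up waiting for kube-dns")
}

In this run the loop never completes within the excerpt: coredns-66bc5c9577-w6tjl stays Pending throughout, so every check ends with "missing components: kube-dns".
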
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	f7b7a30f8e989       523cad1a4df73       7 seconds ago        Exited              dashboard-metrics-scraper   3                   7b6f3d7472a9b       dashboard-metrics-scraper-6ffb444bf9-nzjcs
	01f4b9ca69414       6e38f40d628db       11 seconds ago       Running             storage-provisioner         4                   4dc3c0061c9e9       storage-provisioner
	75639c6a69c7f       07655ddf2eebe       43 seconds ago       Running             kubernetes-dashboard        0                   4b84ae45ce6bc       kubernetes-dashboard-855c9754f9-tjkd6
	911420cee9a03       56cc512116c8f       54 seconds ago       Running             busybox                     1                   0f9d3dd2727c6       busybox
	d5c7e98006716       52546a367cc9e       54 seconds ago       Running             coredns                     1                   2c8fe2fa4d8eb       coredns-66bc5c9577-qj565
	50bbcbe6da8c0       6e38f40d628db       54 seconds ago       Exited              storage-provisioner         3                   4dc3c0061c9e9       storage-provisioner
	442a72e42dd57       df0860106674d       54 seconds ago       Running             kube-proxy                  4                   01742fa0ef8bc       kube-proxy-txcms
	c797e480e1280       409467f978b4a       55 seconds ago       Running             kindnet-cni                 1                   faaa0cbb65b2f       kindnet-4nqpl
	665ad2965128a       a0af72f2ec6d6       About a minute ago   Running             kube-controller-manager     1                   f8c9032e570d2       kube-controller-manager-default-k8s-diff-port-149888
	ad0c48b900b49       46169d968e920       About a minute ago   Running             kube-scheduler              1                   66c1819be8b99       kube-scheduler-default-k8s-diff-port-149888
	91c6fdc1fceb1       5f1f5298c888d       About a minute ago   Running             etcd                        1                   44fc528002938       etcd-default-k8s-diff-port-149888
	9024edac09e01       90550c43ad2bc       About a minute ago   Running             kube-apiserver              1                   e51828479176c       kube-apiserver-default-k8s-diff-port-149888
	296905fadad35       56cc512116c8f       About a minute ago   Exited              busybox                     0                   f4394928246fd       busybox
	8fe3c9f630050       52546a367cc9e       About a minute ago   Exited              coredns                     0                   40c79732ed9ad       coredns-66bc5c9577-qj565
	351f4368e8712       df0860106674d       2 minutes ago        Exited              kube-proxy                  3                   c885f7a6b94c4       kube-proxy-txcms
	fc26366126b18       409467f978b4a       2 minutes ago        Exited              kindnet-cni                 0                   2f402c2a337cb       kindnet-4nqpl
	bbfb1c954fb10       46169d968e920       3 minutes ago        Exited              kube-scheduler              0                   41315b7fcfdd6       kube-scheduler-default-k8s-diff-port-149888
	c43b276ad6480       5f1f5298c888d       3 minutes ago        Exited              etcd                        0                   bd64faadeff7f       etcd-default-k8s-diff-port-149888
	c2e3a7b89e470       a0af72f2ec6d6       3 minutes ago        Exited              kube-controller-manager     0                   6d6d2b50fb9ff       kube-controller-manager-default-k8s-diff-port-149888
	6cb08d2f210ed       90550c43ad2bc       3 minutes ago        Exited              kube-apiserver              0                   64d233f4794ef       kube-apiserver-default-k8s-diff-port-149888
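In the table above, dashboard-metrics-scraper (f7b7a30f8e989) is already on attempt 3 and keeps exiting, while the control-plane containers were all restarted about a minute earlier. A hedged sketch, outside the captured run, of how its exit reason could be pulled from the node's container runtime over minikube ssh (container ID taken from the table, profile name from the node sections below; crictl accepts an ID prefix):

    out/minikube-linux-amd64 -p default-k8s-diff-port-149888 ssh -- sudo crictl logs f7b7a30f8e989
    out/minikube-linux-amd64 -p default-k8s-diff-port-149888 ssh -- sudo crictl inspect f7b7a30f8e989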
	
	
	==> containerd <==
	Sep 19 23:16:26 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:26.877493325Z" level=info msg="StartContainer for \"f7b7a30f8e989f86d14d26eb1f419fddf6480eb4bf6bfa06559d6ce4730d46c0\""
	Sep 19 23:16:26 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:26.947924694Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 23:16:26 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:26.950740908Z" level=info msg="StartContainer for \"f7b7a30f8e989f86d14d26eb1f419fddf6480eb4bf6bfa06559d6ce4730d46c0\" returns successfully"
	Sep 19 23:16:26 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:26.965006786Z" level=info msg="received exit event container_id:\"f7b7a30f8e989f86d14d26eb1f419fddf6480eb4bf6bfa06559d6ce4730d46c0\"  id:\"f7b7a30f8e989f86d14d26eb1f419fddf6480eb4bf6bfa06559d6ce4730d46c0\"  pid:2561  exit_status:1  exited_at:{seconds:1758323786  nanos:963941580}"
	Sep 19 23:16:27 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:27.003133936Z" level=info msg="shim disconnected" id=f7b7a30f8e989f86d14d26eb1f419fddf6480eb4bf6bfa06559d6ce4730d46c0 namespace=k8s.io
	Sep 19 23:16:27 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:27.003244373Z" level=warning msg="cleaning up after shim disconnected" id=f7b7a30f8e989f86d14d26eb1f419fddf6480eb4bf6bfa06559d6ce4730d46c0 namespace=k8s.io
	Sep 19 23:16:27 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:27.003284415Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 19 23:16:27 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:27.204965015Z" level=info msg="RemoveContainer for \"77be17c43496b56e40d07eda68898f1d52f0a4ab40c322c86030c94680335c0e\""
	Sep 19 23:16:27 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:27.214767229Z" level=info msg="RemoveContainer for \"77be17c43496b56e40d07eda68898f1d52f0a4ab40c322c86030c94680335c0e\" returns successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.035087579Z" level=info msg="StopPodSandbox for \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\""
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.035458371Z" level=info msg="TearDown network for sandbox \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\" successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.035493489Z" level=info msg="StopPodSandbox for \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\" returns successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.040113206Z" level=info msg="RemovePodSandbox for \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\""
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.040185433Z" level=info msg="Forcibly stopping sandbox \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\""
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.040302623Z" level=info msg="TearDown network for sandbox \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\" successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.048754867Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.048871381Z" level=info msg="RemovePodSandbox \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\" returns successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.050147942Z" level=info msg="StopPodSandbox for \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\""
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.050298152Z" level=info msg="TearDown network for sandbox \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\" successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.050317485Z" level=info msg="StopPodSandbox for \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\" returns successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.050885429Z" level=info msg="RemovePodSandbox for \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\""
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.050919184Z" level=info msg="Forcibly stopping sandbox \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\""
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.051269282Z" level=info msg="TearDown network for sandbox \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\" successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.058870199Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.058996037Z" level=info msg="RemovePodSandbox \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\" returns successfully"
	
	
	==> coredns [8fe3c9f630050a4562bdee872e3bd5b158ebb872b819c9704b64439e00342d40] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37991 - 35562 "HINFO IN 9064399029636666911.5004093786555544926. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.059647854s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d5c7e980067162f7d7cdd11137f7223d6574824e17220f7c25d8e3708be42a76] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45959 - 29957 "HINFO IN 3487927117264530873.7781742703901399107. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.068781999s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
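The restarted CoreDNS instance starts serving with an unsynced cache and then times out dialing 10.96.0.1:443, the in-cluster kubernetes Service VIP, so DNS readiness lags until the apiserver is reachable through that VIP again. A hedged way to check that path from outside the test harness, assuming the standard k8s-app=kube-proxy label on the kube-proxy pods:

    out/minikube-linux-amd64 -p default-k8s-diff-port-149888 kubectl -- get endpoints kubernetes
    out/minikube-linux-amd64 -p default-k8s-diff-port-149888 kubectl -- -n kube-system logs -l k8s-app=kube-proxy --tail=20

If the kubernetes endpoints list the apiserver address but the VIP still times out, the gap is usually in the node's service proxying rather than in CoreDNS itself.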
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-149888
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-149888
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=default-k8s-diff-port-149888
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_13_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:13:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-149888
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:16:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:16:06 +0000   Fri, 19 Sep 2025 23:13:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:16:06 +0000   Fri, 19 Sep 2025 23:13:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:16:06 +0000   Fri, 19 Sep 2025 23:13:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:16:06 +0000   Fri, 19 Sep 2025 23:13:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-149888
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 e74d6dfd16154bd0b4ac1ae2d5aaa930
	  System UUID:                48f7b01c-0e5a-4c51-b5e5-65660304d365
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-qj565                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m56s
	  kube-system                 etcd-default-k8s-diff-port-149888                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m5s
	  kube-system                 kindnet-4nqpl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-149888             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-149888    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 kube-proxy-txcms                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-149888             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 metrics-server-746fcd58dc-hskrc                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         82s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nzjcs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tjkd6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 2m13s              kube-proxy       
	  Normal  Starting                 3m1s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m1s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m1s               kubelet          Node default-k8s-diff-port-149888 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s               kubelet          Node default-k8s-diff-port-149888 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s               kubelet          Node default-k8s-diff-port-149888 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m57s              node-controller  Node default-k8s-diff-port-149888 event: Registered Node default-k8s-diff-port-149888 in Controller
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x9 over 62s)  kubelet          Node default-k8s-diff-port-149888 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x7 over 62s)  kubelet          Node default-k8s-diff-port-149888 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x7 over 62s)  kubelet          Node default-k8s-diff-port-149888 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           53s                node-controller  Node default-k8s-diff-port-149888 event: Registered Node default-k8s-diff-port-149888 in Controller
	  Normal  Starting                 2s                 kubelet          Starting kubelet.
	  Normal  Starting                 2s                 kubelet          Starting kubelet.
	  Normal  Starting                 1s                 kubelet          Starting kubelet.
	
	
	==> dmesg <==
	[  +0.995983] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.504855] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.994952] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.505945] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501588] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.993278] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.507907] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500995] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.992251] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.509112] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501500] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989799] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.510643] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501653] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989383] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.511830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500482] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989056] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.512088] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500241] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501649] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501291] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501422] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[Sep19 23:13] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	
	
	==> etcd [91c6fdc1fceb1c8c70caa847aea8dfc0a97f915a36dc2754d386368a179f0728] <==
	{"level":"warn","ts":"2025-09-19T23:15:37.740607Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.577396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/default-k8s-diff-port-149888.1866d21e77b76b1e\" limit:1 ","response":"range_response_count:1 size:793"}
	{"level":"info","ts":"2025-09-19T23:15:37.740643Z","caller":"traceutil/trace.go:172","msg":"trace[598269319] range","detail":"{range_begin:/registry/events/default/default-k8s-diff-port-149888.1866d21e77b76b1e; range_end:; response_count:1; response_revision:562; }","duration":"107.626007ms","start":"2025-09-19T23:15:37.633005Z","end":"2025-09-19T23:15:37.740631Z","steps":["trace[598269319] 'agreement among raft nodes before linearized reading'  (duration: 107.504715ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:15:37.930915Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.510485ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-746fcd58dc-hskrc\" limit:1 ","response":"range_response_count:1 size:4384"}
	{"level":"info","ts":"2025-09-19T23:15:37.930985Z","caller":"traceutil/trace.go:172","msg":"trace[501430462] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-746fcd58dc-hskrc; range_end:; response_count:1; response_revision:564; }","duration":"149.59428ms","start":"2025-09-19T23:15:37.781372Z","end":"2025-09-19T23:15:37.930967Z","steps":["trace[501430462] 'agreement among raft nodes before linearized reading'  (duration: 59.090763ms)","trace[501430462] 'range keys from in-memory index tree'  (duration: 90.310594ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:15:37.931009Z","caller":"traceutil/trace.go:172","msg":"trace[1314534809] transaction","detail":"{read_only:false; response_revision:565; number_of_response:1; }","duration":"150.096324ms","start":"2025-09-19T23:15:37.780898Z","end":"2025-09-19T23:15:37.930994Z","steps":["trace[1314534809] 'process raft request'  (duration: 59.608146ms)","trace[1314534809] 'compare'  (duration: 90.28644ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:15:37.931083Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.463579ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" limit:1 ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2025-09-19T23:15:37.931137Z","caller":"traceutil/trace.go:172","msg":"trace[2002339098] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:1; response_revision:565; }","duration":"149.524374ms","start":"2025-09-19T23:15:37.781599Z","end":"2025-09-19T23:15:37.931123Z","steps":["trace[2002339098] 'agreement among raft nodes before linearized reading'  (duration: 149.378912ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:15:37.931274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.984776ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:node-problem-detector\" limit:1 ","response":"range_response_count:1 size:655"}
	{"level":"info","ts":"2025-09-19T23:15:37.931314Z","caller":"traceutil/trace.go:172","msg":"trace[1441136243] range","detail":"{range_begin:/registry/clusterroles/system:node-problem-detector; range_end:; response_count:1; response_revision:565; }","duration":"148.030013ms","start":"2025-09-19T23:15:37.783274Z","end":"2025-09-19T23:15:37.931304Z","steps":["trace[1441136243] 'agreement among raft nodes before linearized reading'  (duration: 147.913859ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:15:38.346045Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.796155ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:15:38.346120Z","caller":"traceutil/trace.go:172","msg":"trace[437704354] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:571; }","duration":"156.891144ms","start":"2025-09-19T23:15:38.189216Z","end":"2025-09-19T23:15:38.346107Z","steps":["trace[437704354] 'agreement among raft nodes before linearized reading'  (duration: 91.19849ms)","trace[437704354] 'range keys from in-memory index tree'  (duration: 65.556724ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:15:38.346412Z","caller":"traceutil/trace.go:172","msg":"trace[1617977273] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"206.929494ms","start":"2025-09-19T23:15:38.139458Z","end":"2025-09-19T23:15:38.346388Z","steps":["trace[1617977273] 'process raft request'  (duration: 140.735783ms)","trace[1617977273] 'compare'  (duration: 65.877219ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:15:38.346450Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.646237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:certificates.k8s.io:kube-apiserver-client-approver\" limit:1 ","response":"range_response_count:1 size:700"}
	{"level":"info","ts":"2025-09-19T23:15:38.346496Z","caller":"traceutil/trace.go:172","msg":"trace[2088407094] range","detail":"{range_begin:/registry/clusterroles/system:certificates.k8s.io:kube-apiserver-client-approver; range_end:; response_count:1; response_revision:572; }","duration":"124.706762ms","start":"2025-09-19T23:15:38.221777Z","end":"2025-09-19T23:15:38.346484Z","steps":["trace[2088407094] 'agreement among raft nodes before linearized reading'  (duration: 124.556854ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:15:38.346514Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.991006ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/default-k8s-diff-port-149888.1866d21e77b76b1e\" limit:1 ","response":"range_response_count:1 size:793"}
	{"level":"info","ts":"2025-09-19T23:15:38.346545Z","caller":"traceutil/trace.go:172","msg":"trace[702841334] range","detail":"{range_begin:/registry/events/default/default-k8s-diff-port-149888.1866d21e77b76b1e; range_end:; response_count:1; response_revision:572; }","duration":"124.029853ms","start":"2025-09-19T23:15:38.222508Z","end":"2025-09-19T23:15:38.346537Z","steps":["trace[702841334] 'agreement among raft nodes before linearized reading'  (duration: 123.892254ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:15:38.472627Z","caller":"traceutil/trace.go:172","msg":"trace[1485426770] linearizableReadLoop","detail":"{readStateIndex:619; appliedIndex:619; }","duration":"123.952756ms","start":"2025-09-19T23:15:38.348647Z","end":"2025-09-19T23:15:38.472600Z","steps":["trace[1485426770] 'read index received'  (duration: 123.94351ms)","trace[1485426770] 'applied index is now lower than readState.Index'  (duration: 7.5µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:15:38.621407Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"272.734927ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver\" limit:1 ","response":"range_response_count:1 size:724"}
	{"level":"info","ts":"2025-09-19T23:15:38.621559Z","caller":"traceutil/trace.go:172","msg":"trace[655080171] range","detail":"{range_begin:/registry/clusterroles/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver; range_end:; response_count:1; response_revision:572; }","duration":"272.870407ms","start":"2025-09-19T23:15:38.348633Z","end":"2025-09-19T23:15:38.621504Z","steps":["trace[655080171] 'agreement among raft nodes before linearized reading'  (duration: 124.064118ms)","trace[655080171] 'range keys from in-memory index tree'  (duration: 148.550822ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:15:38.622106Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.864791ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873788782990676015 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-149888.1866d21e77b76b1e\" mod_revision:568 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-149888.1866d21e77b76b1e\" value_size:690 lease:4650416746135900089 >> failure:<request_range:<key:\"/registry/events/default/default-k8s-diff-port-149888.1866d21e77b76b1e\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:15:38.622233Z","caller":"traceutil/trace.go:172","msg":"trace[1478913785] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"273.906277ms","start":"2025-09-19T23:15:38.348310Z","end":"2025-09-19T23:15:38.622216Z","steps":["trace[1478913785] 'process raft request'  (duration: 124.374288ms)","trace[1478913785] 'compare'  (duration: 148.747187ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:15:38.818947Z","caller":"traceutil/trace.go:172","msg":"trace[1836003029] linearizableReadLoop","detail":"{readStateIndex:623; appliedIndex:623; }","duration":"114.435179ms","start":"2025-09-19T23:15:38.704489Z","end":"2025-09-19T23:15:38.818924Z","steps":["trace[1836003029] 'read index received'  (duration: 114.426566ms)","trace[1836003029] 'applied index is now lower than readState.Index'  (duration: 7.53µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:15:38.895390Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.878196ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:disruption-controller\" limit:1 ","response":"range_response_count:1 size:972"}
	{"level":"info","ts":"2025-09-19T23:15:38.895497Z","caller":"traceutil/trace.go:172","msg":"trace[1235492677] range","detail":"{range_begin:/registry/clusterroles/system:controller:disruption-controller; range_end:; response_count:1; response_revision:576; }","duration":"190.973578ms","start":"2025-09-19T23:15:38.704478Z","end":"2025-09-19T23:15:38.895452Z","steps":["trace[1235492677] 'agreement among raft nodes before linearized reading'  (duration: 114.530582ms)","trace[1235492677] 'range keys from in-memory index tree'  (duration: 76.238031ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:15:38.895506Z","caller":"traceutil/trace.go:172","msg":"trace[1349552410] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"193.145386ms","start":"2025-09-19T23:15:38.702350Z","end":"2025-09-19T23:15:38.895495Z","steps":["trace[1349552410] 'process raft request'  (duration: 116.616939ms)","trace[1349552410] 'compare'  (duration: 76.384046ms)"],"step_count":2}
	
	
	==> etcd [c43b276ad64808b3638f48fb95a466e4ac5a6ca6b0f2e698462337fbab846497] <==
	{"level":"warn","ts":"2025-09-19T23:13:27.815430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:27.886348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58058","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T23:13:30.004284Z","caller":"traceutil/trace.go:172","msg":"trace[232577443] transaction","detail":"{read_only:false; response_revision:134; number_of_response:1; }","duration":"146.808563ms","start":"2025-09-19T23:13:29.857449Z","end":"2025-09-19T23:13:30.004258Z","steps":["trace[232577443] 'process raft request'  (duration: 58.03485ms)","trace[232577443] 'compare'  (duration: 88.59504ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:13:30.232638Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.408229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:validatingadmissionpolicy-status-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:13:30.232736Z","caller":"traceutil/trace.go:172","msg":"trace[780047360] range","detail":"{range_begin:/registry/clusterroles/system:controller:validatingadmissionpolicy-status-controller; range_end:; response_count:0; response_revision:136; }","duration":"126.565927ms","start":"2025-09-19T23:13:30.106148Z","end":"2025-09-19T23:13:30.232714Z","steps":["trace[780047360] 'range keys from in-memory index tree'  (duration: 126.282286ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:13:30.649310Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.435828ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:13:30.649390Z","caller":"traceutil/trace.go:172","msg":"trace[1505618848] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:143; }","duration":"156.543726ms","start":"2025-09-19T23:13:30.492829Z","end":"2025-09-19T23:13:30.649373Z","steps":["trace[1505618848] 'agreement among raft nodes before linearized reading'  (duration: 78.483421ms)","trace[1505618848] 'range keys from in-memory index tree'  (duration: 77.867014ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:13:30.649472Z","caller":"traceutil/trace.go:172","msg":"trace[374656585] transaction","detail":"{read_only:false; response_revision:144; number_of_response:1; }","duration":"264.988542ms","start":"2025-09-19T23:13:30.384417Z","end":"2025-09-19T23:13:30.649405Z","steps":["trace[374656585] 'process raft request'  (duration: 187.031893ms)","trace[374656585] 'compare'  (duration: 77.744988ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:13:30.912700Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.219233ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873788782958063734 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:node-proxier\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:node-proxier\" value_size:627 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:13:30.913259Z","caller":"traceutil/trace.go:172","msg":"trace[335778829] transaction","detail":"{read_only:false; response_revision:145; number_of_response:1; }","duration":"258.990155ms","start":"2025-09-19T23:13:30.654137Z","end":"2025-09-19T23:13:30.913128Z","steps":["trace[335778829] 'process raft request'  (duration: 128.437082ms)","trace[335778829] 'compare'  (duration: 129.104569ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:13:31.169920Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.012706ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873788782958063736 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:kube-controller-manager\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:kube-controller-manager\" value_size:662 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:13:31.170019Z","caller":"traceutil/trace.go:172","msg":"trace[651860733] transaction","detail":"{read_only:false; response_revision:146; number_of_response:1; }","duration":"251.381159ms","start":"2025-09-19T23:13:30.918620Z","end":"2025-09-19T23:13:31.170001Z","steps":["trace[651860733] 'process raft request'  (duration: 122.215941ms)","trace[651860733] 'compare'  (duration: 128.853571ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:13:31.425478Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.564266ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873788782958063738 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:kube-dns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:kube-dns\" value_size:606 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:13:31.425551Z","caller":"traceutil/trace.go:172","msg":"trace[1663603764] transaction","detail":"{read_only:false; response_revision:147; number_of_response:1; }","duration":"249.884129ms","start":"2025-09-19T23:13:31.175657Z","end":"2025-09-19T23:13:31.425541Z","steps":["trace[1663603764] 'process raft request'  (duration: 121.193789ms)","trace[1663603764] 'compare'  (duration: 128.43632ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:13:31.617572Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.73745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:13:31.617660Z","caller":"traceutil/trace.go:172","msg":"trace[747472130] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:147; }","duration":"124.839931ms","start":"2025-09-19T23:13:31.492800Z","end":"2025-09-19T23:13:31.617640Z","steps":["trace[747472130] 'agreement among raft nodes before linearized reading'  (duration: 61.935669ms)","trace[747472130] 'range keys from in-memory index tree'  (duration: 62.765629ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:13:31.617739Z","caller":"traceutil/trace.go:172","msg":"trace[77664982] transaction","detail":"{read_only:false; response_revision:148; number_of_response:1; }","duration":"187.311583ms","start":"2025-09-19T23:13:31.430403Z","end":"2025-09-19T23:13:31.617714Z","steps":["trace[77664982] 'process raft request'  (duration: 124.392ms)","trace[77664982] 'compare'  (duration: 62.693129ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:13:31.788775Z","caller":"traceutil/trace.go:172","msg":"trace[858407443] transaction","detail":"{read_only:false; response_revision:149; number_of_response:1; }","duration":"166.63206ms","start":"2025-09-19T23:13:31.622117Z","end":"2025-09-19T23:13:31.788749Z","steps":["trace[858407443] 'process raft request'  (duration: 96.994615ms)","trace[858407443] 'compare'  (duration: 69.496757ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:14:11.487036Z","caller":"traceutil/trace.go:172","msg":"trace[396052314] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"157.199919ms","start":"2025-09-19T23:14:11.329811Z","end":"2025-09-19T23:14:11.487010Z","steps":["trace[396052314] 'process raft request'  (duration: 92.008901ms)","trace[396052314] 'compare'  (duration: 65.059975ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:14:11.810827Z","caller":"traceutil/trace.go:172","msg":"trace[754889133] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"122.202087ms","start":"2025-09-19T23:14:11.688602Z","end":"2025-09-19T23:14:11.810804Z","steps":["trace[754889133] 'process raft request'  (duration: 121.963074ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:14:37.609149Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.078707ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873788782958064630 > lease_revoke:<id:408999644103b3a8>","response":"size:28"}
	{"level":"info","ts":"2025-09-19T23:15:05.541640Z","caller":"traceutil/trace.go:172","msg":"trace[1056613088] linearizableReadLoop","detail":"{readStateIndex:520; appliedIndex:520; }","duration":"170.332931ms","start":"2025-09-19T23:15:05.371270Z","end":"2025-09-19T23:15:05.541603Z","steps":["trace[1056613088] 'read index received'  (duration: 170.317954ms)","trace[1056613088] 'applied index is now lower than readState.Index'  (duration: 13.169µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:15:05.541774Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.485971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:15:05.541896Z","caller":"traceutil/trace.go:172","msg":"trace[1110311917] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:486; }","duration":"170.626639ms","start":"2025-09-19T23:15:05.371256Z","end":"2025-09-19T23:15:05.541883Z","steps":["trace[1110311917] 'agreement among raft nodes before linearized reading'  (duration: 170.410354ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:15:05.541808Z","caller":"traceutil/trace.go:172","msg":"trace[805537127] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"189.143186ms","start":"2025-09-19T23:15:05.352650Z","end":"2025-09-19T23:15:05.541793Z","steps":["trace[805537127] 'process raft request'  (duration: 188.991726ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:16:34 up  1:58,  0 users,  load average: 4.26, 4.11, 2.74
	Linux default-k8s-diff-port-149888 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [c797e480e1280f5c22736b8a5bf38e2534ffb233c66eb782ab3f678000ec15e1] <==
	I0919 23:15:39.566984       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0919 23:15:39.567351       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0919 23:15:39.568610       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:15:39.568636       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:15:39.568672       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:15:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:15:39.865685       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:15:39.896176       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:15:39.896517       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:15:39.897002       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0919 23:16:09.803825       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0919 23:16:09.803822       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0919 23:16:09.898679       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0919 23:16:09.898682       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0919 23:16:11.397222       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:16:11.397266       1 metrics.go:72] Registering metrics
	I0919 23:16:11.397388       1 controller.go:711] "Syncing nftables rules"
	I0919 23:16:19.805324       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:16:19.805386       1 main.go:301] handling current node
	I0919 23:16:29.803280       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:16:29.803356       1 main.go:301] handling current node
	
	
	==> kindnet [fc26366126b18bc013992c759f1ace9b13c7b3a4d0bf6ba034cf10b8bc295925] <==
	I0919 23:13:39.382290       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:13:39.382312       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:13:39.382344       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:13:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:13:39.613444       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:13:39.613509       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:13:39.613524       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:13:39.614133       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0919 23:14:09.614611       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0919 23:14:09.614611       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0919 23:14:09.614615       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0919 23:14:09.614759       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0919 23:14:40.793926       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0919 23:14:40.930995       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0919 23:14:40.995735       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0919 23:14:41.169090       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0919 23:14:43.513784       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:14:43.513821       1 metrics.go:72] Registering metrics
	I0919 23:14:43.513943       1 controller.go:711] "Syncing nftables rules"
	I0919 23:14:49.616859       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:14:49.616901       1 main.go:301] handling current node
	I0919 23:14:59.613111       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:14:59.613150       1 main.go:301] handling current node
	I0919 23:15:09.616806       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:15:09.616843       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6cb08d2f210eda6eb6b104b96ac64e816b7fab2dd877c455b3d32f16fa032f13] <==
	I0919 23:13:38.294791       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:13:38.301329       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:13:38.394148       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:14:30.971448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:14:44.541836       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 23:15:11.493563       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:38046: use of closed network connection
	I0919 23:15:12.320762       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 23:15:12.332564       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:15:12.332659       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 23:15:12.332728       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0919 23:15:12.432751       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.111.57.168"}
	W0919 23:15:12.443016       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:15:12.443084       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 23:15:12.445732       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	W0919 23:15:12.450947       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:15:12.451007       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [9024edac09e016d0476e31f1755919ddf4504e371e70d553d6b26c853be5cb3a] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:15:36.228119       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 23:15:36.484364       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0919 23:15:36.578780       1 controller.go:667] quota admission added evaluator for: namespaces
	I0919 23:15:36.668531       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 23:15:36.681720       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 23:15:36.705609       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0919 23:15:37.130260       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 23:15:37.182127       1 handler_proxy.go:99] no RequestInfo found in the context
	W0919 23:15:37.182168       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:15:37.182206       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 23:15:37.182225       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0919 23:15:37.182251       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:15:37.183393       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:15:37.261961       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.239.169"}
	I0919 23:15:37.741018       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.148.91"}
	I0919 23:15:41.563536       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:15:41.908460       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 23:15:41.958633       1 controller.go:667] quota admission added evaluator for: endpoints
	I0919 23:15:41.958634       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [665ad2965128a1aa390f367ccbca624d01ee6bee89aa4b03acffe494908e88b8] <==
	I0919 23:15:41.555813       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 23:15:41.556141       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 23:15:41.560856       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 23:15:41.561054       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0919 23:15:41.561266       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0919 23:15:41.561386       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0919 23:15:41.561401       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0919 23:15:41.561409       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0919 23:15:41.565423       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 23:15:41.565868       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:15:41.565884       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 23:15:41.565892       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 23:15:41.570304       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 23:15:41.573437       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 23:15:41.573516       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:15:41.578117       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 23:15:41.578336       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 23:15:41.582467       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0919 23:15:41.582566       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 23:15:41.587207       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 23:15:41.589527       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 23:15:41.590800       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0919 23:15:41.597148       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0919 23:16:11.578829       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:16:11.605838       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [c2e3a7b89e4703676da0d2bd9bc89da04f199a71876c7e42f6ed8afbc9fd9473] <==
	I0919 23:13:37.389745       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 23:13:37.389794       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 23:13:37.390263       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 23:13:37.390285       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 23:13:37.390318       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 23:13:37.390534       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 23:13:37.390632       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0919 23:13:37.390793       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 23:13:37.390811       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 23:13:37.392050       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 23:13:37.393339       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 23:13:37.394470       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0919 23:13:37.394614       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 23:13:37.394622       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0919 23:13:37.394681       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0919 23:13:37.394693       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0919 23:13:37.394700       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0919 23:13:37.404865       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:13:37.407204       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-149888" podCIDRs=["10.244.0.0/24"]
	I0919 23:13:37.410946       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0919 23:13:37.421263       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 23:13:37.431590       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:13:37.439955       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:13:37.439977       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 23:13:37.439984       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [351f4368e8712652bd68f0bd0ebb515c4f49fef1d60d7f5a8189bd9bb301dfa1] <==
	I0919 23:14:20.435528       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:14:20.506456       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:14:20.607146       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:14:20.607208       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0919 23:14:20.607333       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:14:20.637634       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:14:20.637768       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:14:20.645510       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:14:20.646061       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:14:20.646085       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:14:20.647662       1 config.go:200] "Starting service config controller"
	I0919 23:14:20.647686       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:14:20.647708       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:14:20.647722       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:14:20.647738       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:14:20.647743       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:14:20.647764       1 config.go:309] "Starting node config controller"
	I0919 23:14:20.647769       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:14:20.748316       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:14:20.748357       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 23:14:20.748372       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:14:20.748391       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [442a72e42dd57d27df7f19e48129f29be808a95cf0062d2de0da9deebbf13a6b] <==
	I0919 23:15:39.465183       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:15:39.554243       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:15:39.655063       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:15:39.655136       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0919 23:15:39.655412       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:15:39.733481       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:15:39.733552       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:15:39.742350       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:15:39.742787       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:15:39.742822       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:15:39.745983       1 config.go:200] "Starting service config controller"
	I0919 23:15:39.746353       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:15:39.749378       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:15:39.746370       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:15:39.749834       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:15:39.748195       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:15:39.746909       1 config.go:309] "Starting node config controller"
	I0919 23:15:39.757752       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:15:39.757914       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:15:39.850132       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 23:15:39.857795       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 23:15:39.857820       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ad0c48b900b49e90cdbef611d4a6547e0ed3c32d04d88e902443a2aa626145e0] <==
	I0919 23:15:34.612824       1 serving.go:386] Generated self-signed cert in-memory
	I0919 23:15:36.201174       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 23:15:36.201216       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:15:36.208947       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0919 23:15:36.209068       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0919 23:15:36.209219       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:15:36.209250       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:15:36.209281       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:15:36.209290       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:15:36.209457       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 23:15:36.209539       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 23:15:36.309293       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0919 23:15:36.309342       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:15:36.309386       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [bbfb1c954fb1034180e24edeaa8f8df98c52266fc3bff9938f32230a087e7bf7] <==
	E0919 23:13:28.451271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 23:13:29.268227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 23:13:29.277974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 23:13:29.331758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 23:13:29.434023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 23:13:29.509447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 23:13:29.635330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 23:13:29.637354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 23:13:29.652742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 23:13:29.682825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 23:13:29.694381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 23:13:29.696814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 23:13:29.697933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 23:13:29.840967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 23:13:29.890173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 23:13:29.901540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 23:13:29.915344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 23:13:29.938268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 23:13:29.983087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 23:13:29.998143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 23:13:30.956189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 23:13:31.186936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 23:13:31.354332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 23:13:31.602296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I0919 23:13:32.444246       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: E0919 23:16:34.398024    3212 file_linux.go:61] "Unable to read config path" err="unable to create inotify: too many open files" path="/etc/kubernetes/manifests"
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.399844    3212 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.7.27" apiVersion="v1"
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.400588    3212 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.400628    3212 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: E0919 23:16:34.400689    3212 plugins.go:580] "Error initializing dynamic plugin prober" err="error initializing watcher: too many open files"
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.401547    3212 server.go:1262] "Started kubelet"
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.401644    3212 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.401663    3212 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.401953    3212 server_v1.go:49] "podresources" method="list" useActivePods=true
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.402237    3212 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.403019    3212 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.405024    3212 volume_manager.go:313] "Starting Kubelet Volume Manager"
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.408880    3212 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: E0919 23:16:34.409003    3212 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"default-k8s-diff-port-149888\" not found"
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.409897    3212 reconciler.go:29] "Reconciler: start to sync state"
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.413456    3212 factory.go:223] Registration of the systemd container factory successfully
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.413668    3212 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.415501    3212 server.go:310] "Adding debug handlers to kubelet server"
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.420066    3212 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: E0919 23:16:34.420175    3212 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: I0919 23:16:34.424439    3212 factory.go:223] Registration of the containerd container factory successfully
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: E0919 23:16:34.425691    3212 manager.go:294] Registration of the raw container factory failed: inotify_init: too many open files
	Sep 19 23:16:34 default-k8s-diff-port-149888 kubelet[3212]: E0919 23:16:34.425738    3212 kubelet.go:1686] "Failed to start cAdvisor" err="inotify_init: too many open files"
	Sep 19 23:16:34 default-k8s-diff-port-149888 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Sep 19 23:16:34 default-k8s-diff-port-149888 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	
	==> kubernetes-dashboard [75639c6a69c7fb7b2b9402fbd69fec246c57cfd5a262d9cd90c13979bd1c85c0] <==
	2025/09/19 23:15:50 Starting overwatch
	2025/09/19 23:15:50 Using namespace: kubernetes-dashboard
	2025/09/19 23:15:50 Using in-cluster config to connect to apiserver
	2025/09/19 23:15:50 Using secret token for csrf signing
	2025/09/19 23:15:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:15:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/19 23:15:50 Successful initial request to the apiserver, version: v1.34.0
	2025/09/19 23:15:50 Generating JWE encryption key
	2025/09/19 23:15:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/19 23:15:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/19 23:15:50 Initializing JWE encryption key from synchronized object
	2025/09/19 23:15:50 Creating in-cluster Sidecar client
	2025/09/19 23:15:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:15:50 Serving insecurely on HTTP port: 9090
	2025/09/19 23:16:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [01f4b9ca69414790581ceaaa1616802fe23fcfbd5472536dee2ae97165537533] <==
	I0919 23:16:22.956609       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 23:16:22.966937       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 23:16:22.966995       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0919 23:16:22.969947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:16:26.425401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:16:31.690007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [50bbcbe6da8c015a54149eff64a6f8dfce18bf32136ab051fee00f8082de50cb] <==
	I0919 23:15:39.320795       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 23:16:09.325313       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888: exit status 2 (365.442717ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-149888 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-hskrc
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-149888 describe pod metrics-server-746fcd58dc-hskrc
E0919 23:16:35.762520   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-149888 describe pod metrics-server-746fcd58dc-hskrc: exit status 1 (86.549562ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-hskrc" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-149888 describe pod metrics-server-746fcd58dc-hskrc: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-149888
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-149888:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "099d669e8ec59e0380498b09d42c20dc8bf9ac466be9ffe01804379b352bff31",
	        "Created": "2025-09-19T23:12:53.067980944Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 341269,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T23:15:26.233239415Z",
	            "FinishedAt": "2025-09-19T23:15:25.162757657Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/099d669e8ec59e0380498b09d42c20dc8bf9ac466be9ffe01804379b352bff31/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/099d669e8ec59e0380498b09d42c20dc8bf9ac466be9ffe01804379b352bff31/hostname",
	        "HostsPath": "/var/lib/docker/containers/099d669e8ec59e0380498b09d42c20dc8bf9ac466be9ffe01804379b352bff31/hosts",
	        "LogPath": "/var/lib/docker/containers/099d669e8ec59e0380498b09d42c20dc8bf9ac466be9ffe01804379b352bff31/099d669e8ec59e0380498b09d42c20dc8bf9ac466be9ffe01804379b352bff31-json.log",
	        "Name": "/default-k8s-diff-port-149888",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-149888:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-149888",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "099d669e8ec59e0380498b09d42c20dc8bf9ac466be9ffe01804379b352bff31",
	                "LowerDir": "/var/lib/docker/overlay2/d230332dd75b686ff154fa7b4d62c612019f129f809a566c88f37f91a9e6f165-init/diff:/var/lib/docker/overlay2/a03f655342f0080430c48b45e821bb7f49cd991d97a882d9cb55b520de280887/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d230332dd75b686ff154fa7b4d62c612019f129f809a566c88f37f91a9e6f165/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d230332dd75b686ff154fa7b4d62c612019f129f809a566c88f37f91a9e6f165/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d230332dd75b686ff154fa7b4d62c612019f129f809a566c88f37f91a9e6f165/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-149888",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-149888/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-149888",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-149888",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-149888",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf3f0544294ce8810b61d63756a175d4ee318bfeaca508f45aa96fab666a84f7",
	            "SandboxKey": "/var/run/docker/netns/cf3f0544294c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-149888": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:4b:e2:b3:29:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0971ed35276bed5da4f47ad531607cc67550d8b9076fbbdee7b98bcf6f2f6f37",
	                    "EndpointID": "ea06d3789a79938a414a89b4fc1901c8e31ca6f4fc05a030da47abc9649a2f4c",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-149888",
	                        "099d669e8ec5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888: exit status 2 (380.067149ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-149888 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-149888 logs -n 25: (1.962138508s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-896447 sudo systemctl status docker --all --full --no-pager                                                                                                         │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │                     │
	│ ssh     │ -p kindnet-896447 sudo systemctl cat docker --no-pager                                                                                                                         │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo cat /etc/docker/daemon.json                                                                                                                             │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │                     │
	│ ssh     │ -p kindnet-896447 sudo docker system info                                                                                                                                      │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │                     │
	│ ssh     │ -p kindnet-896447 sudo systemctl status cri-docker --all --full --no-pager                                                                                                     │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-149888 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                        │ default-k8s-diff-port-149888 │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ start   │ -p default-k8s-diff-port-149888 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ default-k8s-diff-port-149888 │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:16 UTC │
	│ ssh     │ -p kindnet-896447 sudo systemctl cat cri-docker --no-pager                                                                                                                     │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │                     │
	│ ssh     │ -p kindnet-896447 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                          │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo cri-dockerd --version                                                                                                                                   │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo systemctl status containerd --all --full --no-pager                                                                                                     │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo systemctl cat containerd --no-pager                                                                                                                     │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo cat /lib/systemd/system/containerd.service                                                                                                              │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo cat /etc/containerd/config.toml                                                                                                                         │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo containerd config dump                                                                                                                                  │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo systemctl status crio --all --full --no-pager                                                                                                           │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │                     │
	│ ssh     │ -p kindnet-896447 sudo systemctl cat crio --no-pager                                                                                                                           │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                 │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ ssh     │ -p kindnet-896447 sudo crio config                                                                                                                                             │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ delete  │ -p kindnet-896447                                                                                                                                                              │ kindnet-896447               │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │ 19 Sep 25 23:15 UTC │
	│ start   │ -p enable-default-cni-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd          │ enable-default-cni-896447    │ jenkins │ v1.37.0 │ 19 Sep 25 23:15 UTC │                     │
	│ image   │ default-k8s-diff-port-149888 image list --format=json                                                                                                                          │ default-k8s-diff-port-149888 │ jenkins │ v1.37.0 │ 19 Sep 25 23:16 UTC │ 19 Sep 25 23:16 UTC │
	│ pause   │ -p default-k8s-diff-port-149888 --alsologtostderr -v=1                                                                                                                         │ default-k8s-diff-port-149888 │ jenkins │ v1.37.0 │ 19 Sep 25 23:16 UTC │ 19 Sep 25 23:16 UTC │
	│ unpause │ -p default-k8s-diff-port-149888 --alsologtostderr -v=1                                                                                                                         │ default-k8s-diff-port-149888 │ jenkins │ v1.37.0 │ 19 Sep 25 23:16 UTC │ 19 Sep 25 23:16 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:15:32
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:15:32.908800  344703 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:15:32.909449  344703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:15:32.909464  344703 out.go:374] Setting ErrFile to fd 2...
	I0919 23:15:32.909471  344703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:15:32.909954  344703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 23:15:32.910752  344703 out.go:368] Setting JSON to false
	I0919 23:15:32.912137  344703 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7077,"bootTime":1758316656,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:15:32.912252  344703 start.go:140] virtualization: kvm guest
	I0919 23:15:32.916948  344703 out.go:179] * [enable-default-cni-896447] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:15:32.919033  344703 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:15:32.919086  344703 notify.go:220] Checking for updates...
	I0919 23:15:32.922145  344703 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:15:32.923531  344703 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:15:32.924966  344703 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 23:15:32.926438  344703 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:15:32.927884  344703 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:15:32.929707  344703 config.go:182] Loaded profile config "calico-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:15:32.929827  344703 config.go:182] Loaded profile config "custom-flannel-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:15:32.929910  344703 config.go:182] Loaded profile config "default-k8s-diff-port-149888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:15:32.930001  344703 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:15:32.959816  344703 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:15:32.959929  344703 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:15:33.025827  344703 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-19 23:15:33.014755271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:15:33.025932  344703 docker.go:318] overlay module found
	I0919 23:15:33.028149  344703 out.go:179] * Using the docker driver based on user configuration
	I0919 23:15:33.030391  344703 start.go:304] selected driver: docker
	I0919 23:15:33.030414  344703 start.go:918] validating driver "docker" against <nil>
	I0919 23:15:33.030429  344703 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:15:33.031103  344703 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:15:33.099704  344703 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-19 23:15:33.086770268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:15:33.099875  344703 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E0919 23:15:33.100105  344703 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0919 23:15:33.100139  344703 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:15:33.102689  344703 out.go:179] * Using Docker driver with root privileges
	I0919 23:15:33.104073  344703 cni.go:84] Creating CNI manager for "bridge"
	I0919 23:15:33.104098  344703 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
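Note: the deprecated --enable-default-cni flag is mapped onto the supported --cni option, so the run above is roughly equivalent to starting the profile with the following (a sketch; the test harness passes additional resource and wait flags):

    out/minikube-linux-amd64 start -p enable-default-cni-896447 \
      --driver=docker --container-runtime=containerd --cni=bridge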
	I0919 23:15:33.104233  344703 start.go:348] cluster config:
	{Name:enable-default-cni-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:enable-default-cni-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:15:33.105990  344703 out.go:179] * Starting "enable-default-cni-896447" primary control-plane node in "enable-default-cni-896447" cluster
	I0919 23:15:33.107421  344703 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 23:15:33.108906  344703 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:15:33.110129  344703 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:15:33.110189  344703 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 23:15:33.110205  344703 cache.go:58] Caching tarball of preloaded images
	I0919 23:15:33.110222  344703 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:15:33.110313  344703 preload.go:172] Found /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:15:33.110327  344703 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0919 23:15:33.110457  344703 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/config.json ...
	I0919 23:15:33.110493  344703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/config.json: {Name:mk6e5425dbce9e674a343695a2d11340896d365f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:33.133744  344703 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:15:33.133777  344703 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:15:33.133798  344703 cache.go:232] Successfully downloaded all kic artifacts
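Note: the kic base image check above can be reproduced by hand; a minimal sketch using the same image reference from this run:

    IMG='gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1'
    # if the image is already in the local daemon, minikube skips the pull; otherwise it downloads it
    docker image inspect "$IMG" >/dev/null 2>&1 || docker pull "$IMG"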
	I0919 23:15:33.133824  344703 start.go:360] acquireMachinesLock for enable-default-cni-896447: {Name:mkcab8753a56cfe000149c538617f5edcdeaefe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:15:33.133927  344703 start.go:364] duration metric: took 84.85µs to acquireMachinesLock for "enable-default-cni-896447"
	I0919 23:15:33.133951  344703 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:enable-default-cni-896447 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:15:33.134030  344703 start.go:125] createHost starting for "" (driver="docker")
	I0919 23:15:32.367449  340598 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-149888 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:15:32.388831  340598 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0919 23:15:32.393565  340598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:15:32.406851  340598 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-149888 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-149888 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.
L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:15:32.406977  340598 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:15:32.407027  340598 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:15:32.444869  340598 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:15:32.444893  340598 containerd.go:534] Images already preloaded, skipping extraction
	I0919 23:15:32.444955  340598 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:15:32.482815  340598 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:15:32.482841  340598 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:15:32.482849  340598 kubeadm.go:926] updating node { 192.168.103.2 8444 v1.34.0 containerd true true} ...
	I0919 23:15:32.482961  340598 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-149888 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-149888 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:15:32.483028  340598 ssh_runner.go:195] Run: sudo crictl info
	I0919 23:15:32.526763  340598 cni.go:84] Creating CNI manager for ""
	I0919 23:15:32.526793  340598 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 23:15:32.526810  340598 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:15:32.526846  340598 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-149888 NodeName:default-k8s-diff-port-149888 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube
/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:15:32.527018  340598 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-149888"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:15:32.527102  340598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:15:32.537472  340598 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:15:32.537543  340598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:15:32.547973  340598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (333 bytes)
	I0919 23:15:32.569211  340598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:15:32.590634  340598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2243 bytes)
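Note: the file staged as kubeadm.yaml.new above is the rendered config printed earlier; on a fresh cluster it is handed to kubeadm, while the restart path taken below only diffs it against the existing copy. A condensed sketch of the fresh-start invocation (mirroring the full command logged for other profiles in this run):

    sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification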
	I0919 23:15:32.613200  340598 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:15:32.617432  340598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:15:32.632368  340598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:15:32.708208  340598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:15:32.734102  340598 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888 for IP: 192.168.103.2
	I0919 23:15:32.734124  340598 certs.go:194] generating shared ca certs ...
	I0919 23:15:32.734146  340598 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:32.734309  340598 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 23:15:32.734359  340598 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 23:15:32.734374  340598 certs.go:256] generating profile certs ...
	I0919 23:15:32.734479  340598 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888/client.key
	I0919 23:15:32.734563  340598 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888/apiserver.key.404e604f
	I0919 23:15:32.734614  340598 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888/proxy-client.key
	I0919 23:15:32.734752  340598 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:15:32.734799  340598 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:15:32.734813  340598 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:15:32.734849  340598 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:15:32.734883  340598 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:15:32.734916  340598 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:15:32.734974  340598 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:15:32.735654  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:15:32.765344  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:15:32.798571  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:15:32.837531  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:15:32.877303  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 23:15:32.908620  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:15:32.939351  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:15:32.971241  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/default-k8s-diff-port-149888/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:15:33.007252  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:15:33.038467  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:15:33.073713  340598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:15:33.104422  340598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:15:33.125651  340598 ssh_runner.go:195] Run: openssl version
	I0919 23:15:33.132270  340598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:15:33.143613  340598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:33.148448  340598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:33.148517  340598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:33.156362  340598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:15:33.167285  340598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:15:33.179868  340598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:15:33.184437  340598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:15:33.184506  340598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:15:33.192662  340598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 23:15:33.203725  340598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:15:33.214834  340598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:15:33.219590  340598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:15:33.219658  340598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:15:33.229560  340598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
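Note: the /etc/ssl/certs/<hash>.0 link names above are OpenSSL subject hashes, which let the verifier locate a CA certificate by hash; the general pattern is:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 in this run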
	I0919 23:15:33.241012  340598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:15:33.245355  340598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 23:15:33.253344  340598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 23:15:33.261198  340598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 23:15:33.269077  340598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 23:15:33.277486  340598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 23:15:33.285630  340598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
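Note: -checkend 86400 asks whether a certificate remains valid for at least the next 24 hours (exit status 0 means it does); for example:

    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-etcd-client.crt; then
      echo "cert valid for at least another day"
    else
      echo "cert expires within 24h - would be regenerated"
    fi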
	I0919 23:15:33.294490  340598 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-149888 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-149888 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L M
ountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:15:33.294594  340598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:15:33.294648  340598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:15:33.366308  340598 cri.go:89] found id: "8fe3c9f630050a4562bdee872e3bd5b158ebb872b819c9704b64439e00342d40"
	I0919 23:15:33.366380  340598 cri.go:89] found id: "cf6b3300eb0813fdae69407769c3f6c2a181ed057592256e5d0484216657585d"
	I0919 23:15:33.366398  340598 cri.go:89] found id: "351f4368e8712652bd68f0bd0ebb515c4f49fef1d60d7f5a8189bd9bb301dfa1"
	I0919 23:15:33.366412  340598 cri.go:89] found id: "fc26366126b18bc013992c759f1ace9b13c7b3a4d0bf6ba034cf10b8bc295925"
	I0919 23:15:33.366425  340598 cri.go:89] found id: "bbfb1c954fb1034180e24edeaa8f8df98c52266fc3bff9938f32230a087e7bf7"
	I0919 23:15:33.366438  340598 cri.go:89] found id: "c43b276ad64808b3638f48fb95a466e4ac5a6ca6b0f2e698462337fbab846497"
	I0919 23:15:33.366451  340598 cri.go:89] found id: "c2e3a7b89e4703676da0d2bd9bc89da04f199a71876c7e42f6ed8afbc9fd9473"
	I0919 23:15:33.366483  340598 cri.go:89] found id: "6cb08d2f210eda6eb6b104b96ac64e816b7fab2dd877c455b3d32f16fa032f13"
	I0919 23:15:33.366505  340598 cri.go:89] found id: ""
	I0919 23:15:33.366562  340598 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0919 23:15:33.392017  340598 cri.go:116] JSON = [{"ociVersion":"1.2.0","id":"e51828479176cf200621b9413a84782d1fd36bebb818cf4d308e648916728f07","pid":872,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e51828479176cf200621b9413a84782d1fd36bebb818cf4d308e648916728f07","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e51828479176cf200621b9413a84782d1fd36bebb818cf4d308e648916728f07/rootfs","created":"2025-09-19T23:15:33.381068889Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"e51828479176cf200621b9413a84782d1fd36bebb818cf4d308e648916728f07","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-diff-port-149888_8ea67c8a9090832adce3801a31c5da22","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-defaul
t-k8s-diff-port-149888","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8ea67c8a9090832adce3801a31c5da22"},"owner":"root"}]
	I0919 23:15:33.392106  340598 cri.go:126] list returned 1 containers
	I0919 23:15:33.392131  340598 cri.go:129] container: {ID:e51828479176cf200621b9413a84782d1fd36bebb818cf4d308e648916728f07 Status:created}
	I0919 23:15:33.392188  340598 cri.go:131] skipping e51828479176cf200621b9413a84782d1fd36bebb818cf4d308e648916728f07 - not in ps
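Note: the id/status pairs above come from runc's JSON listing for containerd's k8s.io namespace; they can be reproduced by hand (jq is an assumption here, not something the test itself uses):

    sudo runc --root /run/containerd/runc/k8s.io list -f json \
      | jq -r '.[] | "\(.id) \(.status)"'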
	I0919 23:15:33.392249  340598 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:15:33.409003  340598 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 23:15:33.409030  340598 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 23:15:33.409092  340598 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 23:15:33.425910  340598 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 23:15:33.426750  340598 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-149888" does not appear in /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:15:33.427828  340598 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14678/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-149888" cluster setting kubeconfig missing "default-k8s-diff-port-149888" context setting]
	I0919 23:15:33.428991  340598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
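Note: the kubeconfig repair amounts to writing the missing cluster and context entries; conceptually it is equivalent to the following sketch (values are illustrative - minikube records whichever API endpoint the driver actually exposes):

    kubectl config set-cluster default-k8s-diff-port-149888 \
      --server=https://192.168.103.2:8444 \
      --certificate-authority=/home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt
    kubectl config set-context default-k8s-diff-port-149888 \
      --cluster=default-k8s-diff-port-149888 --user=default-k8s-diff-port-149888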
	I0919 23:15:33.431337  340598 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 23:15:33.447295  340598 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0919 23:15:33.447376  340598 kubeadm.go:593] duration metric: took 38.338055ms to restartPrimaryControlPlane
	I0919 23:15:33.447407  340598 kubeadm.go:394] duration metric: took 152.910304ms to StartCluster
	I0919 23:15:33.447429  340598 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:33.447536  340598 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:15:33.448796  340598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:33.450821  340598 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:15:33.451317  340598 config.go:182] Loaded profile config "default-k8s-diff-port-149888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:15:33.451018  340598 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:15:33.451558  340598 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-149888"
	I0919 23:15:33.451588  340598 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-149888"
	W0919 23:15:33.451601  340598 addons.go:247] addon storage-provisioner should already be in state true
	I0919 23:15:33.451632  340598 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:15:33.452146  340598 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:15:33.452347  340598 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-149888"
	I0919 23:15:33.452369  340598 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-149888"
	I0919 23:15:33.452398  340598 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-149888"
	I0919 23:15:33.452610  340598 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-149888"
	I0919 23:15:33.452634  340598 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-149888"
	W0919 23:15:33.452644  340598 addons.go:247] addon metrics-server should already be in state true
	I0919 23:15:33.452665  340598 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:15:33.452687  340598 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:15:33.452795  340598 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-149888"
	W0919 23:15:33.452812  340598 addons.go:247] addon dashboard should already be in state true
	I0919 23:15:33.452835  340598 out.go:179] * Verifying Kubernetes components...
	I0919 23:15:33.452844  340598 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:15:33.453127  340598 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:15:33.453439  340598 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:15:33.454721  340598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:15:33.492811  340598 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 23:15:33.493845  340598 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-149888"
	W0919 23:15:33.493870  340598 addons.go:247] addon default-storageclass should already be in state true
	I0919 23:15:33.493899  340598 host.go:66] Checking if "default-k8s-diff-port-149888" exists ...
	I0919 23:15:33.494576  340598 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 23:15:33.494869  340598 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 23:15:33.494962  340598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:15:33.495121  340598 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-149888 --format={{.State.Status}}
	I0919 23:15:33.496492  340598 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0919 23:15:33.498336  340598 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0919 23:15:33.499991  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0919 23:15:33.500011  340598 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0919 23:15:33.500072  340598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:15:33.506383  340598 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:15:30.071091  337922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.crt.e8405b3a ...
	I0919 23:15:30.071121  337922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.crt.e8405b3a: {Name:mkfc6d9fb70774e93edea0f30068f954d770e855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:30.071305  337922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.key.e8405b3a ...
	I0919 23:15:30.071323  337922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.key.e8405b3a: {Name:mkf2bc5573d0fadb539f07c387914ddabda7e1e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:30.071429  337922 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.crt.e8405b3a -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.crt
	I0919 23:15:30.071553  337922 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.key.e8405b3a -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.key
	I0919 23:15:30.071647  337922 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.key
	I0919 23:15:30.071669  337922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.crt with IP's: []
	I0919 23:15:30.329852  337922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.crt ...
	I0919 23:15:30.329880  337922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.crt: {Name:mkf4ed8753967f71e4fee5b648f600e5521ad677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:30.330033  337922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.key ...
	I0919 23:15:30.330046  337922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.key: {Name:mk83701e5fc5bd0781830011816d1b3c9031d60f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:30.330250  337922 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:15:30.330300  337922 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:15:30.330314  337922 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:15:30.330345  337922 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:15:30.330382  337922 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:15:30.330412  337922 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:15:30.330482  337922 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:15:30.331252  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:15:30.362599  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:15:30.391704  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:15:30.421576  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:15:30.450400  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 23:15:30.480058  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:15:30.508820  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:15:30.537403  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/custom-flannel-896447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:15:30.566342  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:15:30.602719  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:15:30.633417  337922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:15:30.667352  337922 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:15:30.702068  337922 ssh_runner.go:195] Run: openssl version
	I0919 23:15:30.715192  337922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:15:30.729426  337922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:15:30.734175  337922 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:15:30.734234  337922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:15:30.742314  337922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:15:30.754034  337922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:15:30.766190  337922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:30.770790  337922 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:30.770854  337922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:30.779014  337922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:15:30.791960  337922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:15:30.804248  337922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:15:30.809303  337922 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:15:30.809370  337922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:15:30.819118  337922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 23:15:30.833413  337922 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:15:30.837665  337922 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:15:30.837733  337922 kubeadm.go:392] StartCluster: {Name:custom-flannel-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:custom-flannel-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:15:30.837821  337922 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:15:30.837894  337922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:15:30.882097  337922 cri.go:89] found id: ""
	I0919 23:15:30.882214  337922 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:15:30.892431  337922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:15:30.902347  337922 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:15:30.902399  337922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:15:30.912724  337922 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:15:30.912748  337922 kubeadm.go:157] found existing configuration files:
	
	I0919 23:15:30.912797  337922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:15:30.922795  337922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:15:30.922860  337922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:15:30.933361  337922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:15:30.943701  337922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:15:30.943770  337922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:15:30.954343  337922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:15:30.964250  337922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:15:30.964301  337922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:15:30.974023  337922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:15:30.984022  337922 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:15:30.984084  337922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:15:30.994536  337922 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:15:31.056589  337922 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:15:31.120263  337922 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:15:33.509955  340598 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:15:33.509979  340598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:15:33.510053  340598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:15:33.535133  340598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:15:33.541327  340598 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:15:33.541357  340598 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:15:33.541425  340598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-149888
	I0919 23:15:33.549091  340598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:15:33.557730  340598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:15:33.570282  340598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/default-k8s-diff-port-149888/id_rsa Username:docker}
	I0919 23:15:33.637116  340598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:15:33.670007  340598 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-149888" to be "Ready" ...
	I0919 23:15:33.705757  340598 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 23:15:33.705784  340598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 23:15:33.708629  340598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:15:33.711304  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0919 23:15:33.711377  340598 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0919 23:15:33.711803  340598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:15:33.765034  340598 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 23:15:33.765058  340598 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 23:15:33.792079  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0919 23:15:33.792113  340598 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0919 23:15:33.836250  340598 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:15:33.836277  340598 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 23:15:33.855571  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0919 23:15:33.855625  340598 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0919 23:15:33.876889  340598 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:15:33.876929  340598 retry.go:31] will retry after 335.346116ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:15:33.895379  340598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:15:33.921949  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0919 23:15:33.921980  340598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0919 23:15:33.952086  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0919 23:15:33.952110  340598 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0919 23:15:33.980888  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0919 23:15:33.980911  340598 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0919 23:15:34.009150  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0919 23:15:34.009201  340598 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0919 23:15:34.036022  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0919 23:15:34.036047  340598 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0919 23:15:34.059314  340598 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:15:34.059340  340598 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0919 23:15:34.084023  340598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:15:34.212579  340598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:15:36.139946  340598 node_ready.go:49] node "default-k8s-diff-port-149888" is "Ready"
	I0919 23:15:36.139981  340598 node_ready.go:38] duration metric: took 2.469918832s for node "default-k8s-diff-port-149888" to be "Ready" ...
	I0919 23:15:36.139998  340598 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:15:36.140070  340598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:15:37.048579  340598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.3367104s)
	I0919 23:15:37.141561  340598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.246135134s)
	I0919 23:15:37.141615  340598 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-149888"
	I0919 23:15:37.747829  340598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.663746718s)
	I0919 23:15:37.747891  340598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (3.535272195s)
	I0919 23:15:37.747945  340598 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.607851251s)
	I0919 23:15:37.747974  340598 api_server.go:72] duration metric: took 4.297109416s to wait for apiserver process to appear ...
	I0919 23:15:37.747982  340598 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:15:37.748003  340598 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0919 23:15:37.752423  340598 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:15:37.752453  340598 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:15:37.777526  340598 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-149888 addons enable metrics-server
	
	I0919 23:15:33.136328  344703 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:15:33.136572  344703 start.go:159] libmachine.API.Create for "enable-default-cni-896447" (driver="docker")
	I0919 23:15:33.136603  344703 client.go:168] LocalClient.Create starting
	I0919 23:15:33.136657  344703 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem
	I0919 23:15:33.136694  344703 main.go:141] libmachine: Decoding PEM data...
	I0919 23:15:33.136708  344703 main.go:141] libmachine: Parsing certificate...
	I0919 23:15:33.136769  344703 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem
	I0919 23:15:33.136791  344703 main.go:141] libmachine: Decoding PEM data...
	I0919 23:15:33.136801  344703 main.go:141] libmachine: Parsing certificate...
	I0919 23:15:33.137109  344703 cli_runner.go:164] Run: docker network inspect enable-default-cni-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:15:33.156264  344703 cli_runner.go:211] docker network inspect enable-default-cni-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:15:33.156338  344703 network_create.go:284] running [docker network inspect enable-default-cni-896447] to gather additional debugging logs...
	I0919 23:15:33.156362  344703 cli_runner.go:164] Run: docker network inspect enable-default-cni-896447
	W0919 23:15:33.177325  344703 cli_runner.go:211] docker network inspect enable-default-cni-896447 returned with exit code 1
	I0919 23:15:33.177361  344703 network_create.go:287] error running [docker network inspect enable-default-cni-896447]: docker network inspect enable-default-cni-896447: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-896447 not found
	I0919 23:15:33.177389  344703 network_create.go:289] output of [docker network inspect enable-default-cni-896447]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-896447 not found
	
	** /stderr **
	I0919 23:15:33.177576  344703 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:15:33.198411  344703 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-465af21e2d8d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:b5:e0:20:10:48} reservation:<nil>}
	I0919 23:15:33.198985  344703 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd9dd989b948 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:ff:32:51:a1:28} reservation:<nil>}
	I0919 23:15:33.199671  344703 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d5cd7c460be9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:d8:0f:d3:49:b9} reservation:<nil>}
	I0919 23:15:33.200592  344703 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b5faa0}
	I0919 23:15:33.200640  344703 network_create.go:124] attempt to create docker network enable-default-cni-896447 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0919 23:15:33.200698  344703 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-896447 enable-default-cni-896447
	I0919 23:15:33.267497  344703 network_create.go:108] docker network enable-default-cni-896447 192.168.76.0/24 created
	I0919 23:15:33.267532  344703 kic.go:121] calculated static IP "192.168.76.2" for the "enable-default-cni-896447" container
	I0919 23:15:33.267604  344703 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:15:33.290626  344703 cli_runner.go:164] Run: docker volume create enable-default-cni-896447 --label name.minikube.sigs.k8s.io=enable-default-cni-896447 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:15:33.317669  344703 oci.go:103] Successfully created a docker volume enable-default-cni-896447
	I0919 23:15:33.317796  344703 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-896447-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-896447 --entrypoint /usr/bin/test -v enable-default-cni-896447:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:15:33.931755  344703 oci.go:107] Successfully prepared a docker volume enable-default-cni-896447
	I0919 23:15:33.931796  344703 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:15:33.931818  344703 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:15:33.931893  344703 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-896447:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 23:15:37.942618  340598 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0919 23:15:37.985625  340598 addons.go:514] duration metric: took 4.534584363s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0919 23:15:38.248802  340598 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0919 23:15:38.253533  340598 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:15:38.253564  340598 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:15:38.748181  340598 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0919 23:15:38.752478  340598 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:15:38.752549  340598 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:15:39.248627  340598 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0919 23:15:39.255957  340598 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:15:39.255985  340598 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:15:39.748144  340598 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0919 23:15:39.756623  340598 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0919 23:15:39.758067  340598 api_server.go:141] control plane version: v1.34.0
	I0919 23:15:39.758094  340598 api_server.go:131] duration metric: took 2.010104144s to wait for apiserver health ...
	I0919 23:15:39.758104  340598 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:15:39.767321  340598 system_pods.go:59] 9 kube-system pods found
	I0919 23:15:39.767423  340598 system_pods.go:61] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:39.767447  340598 system_pods.go:61] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:39.767496  340598 system_pods.go:61] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 23:15:39.767522  340598 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:39.767542  340598 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:15:39.767577  340598 system_pods.go:61] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:39.767587  340598 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:39.767595  340598 system_pods.go:61] "metrics-server-746fcd58dc-hskrc" [40d8858a-a2a6-4ecb-a444-fc51fc311b46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:15:39.767602  340598 system_pods.go:61] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:15:39.767609  340598 system_pods.go:74] duration metric: took 9.499039ms to wait for pod list to return data ...
	I0919 23:15:39.767619  340598 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:15:39.773053  340598 default_sa.go:45] found service account: "default"
	I0919 23:15:39.773192  340598 default_sa.go:55] duration metric: took 5.56416ms for default service account to be created ...
	I0919 23:15:39.773230  340598 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:15:39.777639  340598 system_pods.go:86] 9 kube-system pods found
	I0919 23:15:39.777729  340598 system_pods.go:89] "coredns-66bc5c9577-qj565" [2af6b5ca-5b32-423d-8ab6-3db69036da7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:39.777742  340598 system_pods.go:89] "etcd-default-k8s-diff-port-149888" [1c238345-9acc-4ebb-b8d6-e0ef5786634a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:39.777751  340598 system_pods.go:89] "kindnet-4nqpl" [6f7ace36-28b0-40f3-b639-e63bad2e11fe] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 23:15:39.777761  340598 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-149888" [c910a182-65c5-4aa6-9379-7c28dd2d16ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:39.777769  340598 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-149888" [249ff0f4-e231-4b64-8d94-00ab31d4c7c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:15:39.777778  340598 system_pods.go:89] "kube-proxy-txcms" [a1f8cdea-e855-4a30-9130-a11d4b74ef54] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:39.777786  340598 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-149888" [6a51c7b1-2bbc-40a2-babc-88c05002bf89] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:39.777793  340598 system_pods.go:89] "metrics-server-746fcd58dc-hskrc" [40d8858a-a2a6-4ecb-a444-fc51fc311b46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:15:39.777800  340598 system_pods.go:89] "storage-provisioner" [9d4ae298-d2d0-4b50-9735-bc7eee3e4392] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:15:39.777809  340598 system_pods.go:126] duration metric: took 4.571567ms to wait for k8s-apps to be running ...
	I0919 23:15:39.777819  340598 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:15:39.777867  340598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:15:39.800350  340598 system_svc.go:56] duration metric: took 22.521873ms WaitForService to wait for kubelet
	I0919 23:15:39.800395  340598 kubeadm.go:578] duration metric: took 6.349514332s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:15:39.800416  340598 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:15:39.804858  340598 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 23:15:39.804890  340598 node_conditions.go:123] node cpu capacity is 8
	I0919 23:15:39.804909  340598 node_conditions.go:105] duration metric: took 4.487437ms to run NodePressure ...
	I0919 23:15:39.804923  340598 start.go:241] waiting for startup goroutines ...
	I0919 23:15:39.804931  340598 start.go:246] waiting for cluster config update ...
	I0919 23:15:39.804946  340598 start.go:255] writing updated cluster config ...
	I0919 23:15:39.805410  340598 ssh_runner.go:195] Run: rm -f paused
	I0919 23:15:39.811083  340598 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:15:39.819548  340598 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qj565" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:15:38.942431  344703 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-896447:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (5.010480573s)
	I0919 23:15:38.942491  344703 kic.go:203] duration metric: took 5.01066921s to extract preloaded images to volume ...
	W0919 23:15:38.942626  344703 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:15:38.942665  344703 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:15:38.942818  344703 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:15:39.062635  344703 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-896447 --name enable-default-cni-896447 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-896447 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-896447 --network enable-default-cni-896447 --ip 192.168.76.2 --volume enable-default-cni-896447:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:15:39.596204  344703 cli_runner.go:164] Run: docker container inspect enable-default-cni-896447 --format={{.State.Running}}
	I0919 23:15:39.624427  344703 cli_runner.go:164] Run: docker container inspect enable-default-cni-896447 --format={{.State.Status}}
	I0919 23:15:39.651352  344703 cli_runner.go:164] Run: docker exec enable-default-cni-896447 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:15:39.725132  344703 oci.go:144] the created container "enable-default-cni-896447" has a running status.
	I0919 23:15:39.725203  344703 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/enable-default-cni-896447/id_rsa...
	I0919 23:15:40.104143  344703 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14678/.minikube/machines/enable-default-cni-896447/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:15:40.147711  344703 cli_runner.go:164] Run: docker container inspect enable-default-cni-896447 --format={{.State.Status}}
	I0919 23:15:40.184441  344703 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:15:40.184472  344703 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-896447 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:15:40.258705  344703 cli_runner.go:164] Run: docker container inspect enable-default-cni-896447 --format={{.State.Status}}
	I0919 23:15:40.289562  344703 machine.go:93] provisionDockerMachine start ...
	I0919 23:15:40.289769  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:40.323022  344703 main.go:141] libmachine: Using SSH client type: native
	I0919 23:15:40.323586  344703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0919 23:15:40.323603  344703 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:15:40.495578  344703 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-896447
	
	I0919 23:15:40.495610  344703 ubuntu.go:182] provisioning hostname "enable-default-cni-896447"
	I0919 23:15:40.495703  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:40.521149  344703 main.go:141] libmachine: Using SSH client type: native
	I0919 23:15:40.521505  344703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0919 23:15:40.521526  344703 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-896447 && echo "enable-default-cni-896447" | sudo tee /etc/hostname
	I0919 23:15:40.687047  344703 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-896447
	
	I0919 23:15:40.687219  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:40.712076  344703 main.go:141] libmachine: Using SSH client type: native
	I0919 23:15:40.712364  344703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0919 23:15:40.712392  344703 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-896447' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-896447/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-896447' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:15:40.864360  344703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:15:40.864399  344703 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14678/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14678/.minikube}
	I0919 23:15:40.864443  344703 ubuntu.go:190] setting up certificates
	I0919 23:15:40.864456  344703 provision.go:84] configureAuth start
	I0919 23:15:40.864515  344703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-896447
	I0919 23:15:40.886435  344703 provision.go:143] copyHostCerts
	I0919 23:15:40.886496  344703 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem, removing ...
	I0919 23:15:40.886504  344703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem
	I0919 23:15:40.886569  344703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/ca.pem (1082 bytes)
	I0919 23:15:40.886684  344703 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem, removing ...
	I0919 23:15:40.886697  344703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem
	I0919 23:15:40.886731  344703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/cert.pem (1123 bytes)
	I0919 23:15:40.886829  344703 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem, removing ...
	I0919 23:15:40.886837  344703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem
	I0919 23:15:40.886872  344703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14678/.minikube/key.pem (1675 bytes)
	I0919 23:15:40.886965  344703 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-896447 san=[127.0.0.1 192.168.76.2 enable-default-cni-896447 localhost minikube]
	I0919 23:15:41.136936  344703 provision.go:177] copyRemoteCerts
	I0919 23:15:41.137035  344703 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:15:41.137087  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:41.157567  344703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/enable-default-cni-896447/id_rsa Username:docker}
	I0919 23:15:41.265236  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 23:15:41.303097  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0919 23:15:41.334135  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:15:41.369576  344703 provision.go:87] duration metric: took 505.106ms to configureAuth
	I0919 23:15:41.369608  344703 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:15:41.369841  344703 config.go:182] Loaded profile config "enable-default-cni-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:15:41.369851  344703 machine.go:96] duration metric: took 1.080192174s to provisionDockerMachine
	I0919 23:15:41.369859  344703 client.go:171] duration metric: took 8.2332502s to LocalClient.Create
	I0919 23:15:41.369882  344703 start.go:167] duration metric: took 8.233310858s to libmachine.API.Create "enable-default-cni-896447"
	I0919 23:15:41.369890  344703 start.go:293] postStartSetup for "enable-default-cni-896447" (driver="docker")
	I0919 23:15:41.369904  344703 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:15:41.369967  344703 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:15:41.370011  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:41.399243  344703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/enable-default-cni-896447/id_rsa Username:docker}
	I0919 23:15:41.511990  344703 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:15:41.518841  344703 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:15:41.518868  344703 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:15:41.518881  344703 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:15:41.518887  344703 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:15:41.518898  344703 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/addons for local assets ...
	I0919 23:15:41.518945  344703 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14678/.minikube/files for local assets ...
	I0919 23:15:41.519016  344703 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem -> 182102.pem in /etc/ssl/certs
	I0919 23:15:41.519103  344703 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:15:41.539376  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:15:41.590663  344703 start.go:296] duration metric: took 220.753633ms for postStartSetup
	I0919 23:15:41.591115  344703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-896447
	I0919 23:15:41.612462  344703 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/config.json ...
	I0919 23:15:41.612743  344703 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:15:41.612787  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:41.634405  344703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/enable-default-cni-896447/id_rsa Username:docker}
	I0919 23:15:41.732247  344703 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:15:41.739671  344703 start.go:128] duration metric: took 8.605598386s to createHost
	I0919 23:15:41.739701  344703 start.go:83] releasing machines lock for "enable-default-cni-896447", held for 8.605762338s
	I0919 23:15:41.739807  344703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-896447
	I0919 23:15:41.768142  344703 ssh_runner.go:195] Run: cat /version.json
	I0919 23:15:41.768227  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:41.768250  344703 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:15:41.768312  344703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-896447
	I0919 23:15:41.795698  344703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/enable-default-cni-896447/id_rsa Username:docker}
	I0919 23:15:41.795853  344703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/enable-default-cni-896447/id_rsa Username:docker}
	I0919 23:15:42.005044  344703 ssh_runner.go:195] Run: systemctl --version
	I0919 23:15:42.010329  344703 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:15:42.015411  344703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:15:42.050101  344703 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:15:42.050203  344703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:15:42.085998  344703 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
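(The two find commands above patch the loopback CNI config and shelve any bridge/podman configs. A minimal standalone sketch of the same cleanup, with paths and glob names taken from the logged commands rather than the actual minikube code:)
for f in /etc/cni/net.d/*loopback.conf*; do                     # normalize loopback config
  [ -f "$f" ] && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$f"
done
for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do    # park competing CNI configs
  [ -f "$f" ] && sudo mv "$f" "$f.mk_disabled"
done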
	I0919 23:15:42.086026  344703 start.go:495] detecting cgroup driver to use...
	I0919 23:15:42.086064  344703 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:15:42.086118  344703 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 23:15:42.101407  344703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:15:42.117471  344703 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:15:42.117534  344703 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:15:42.132604  344703 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:15:42.148842  344703 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:15:42.251216  344703 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:15:42.333460  344703 docker.go:234] disabling docker service ...
	I0919 23:15:42.333589  344703 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:15:42.355374  344703 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:15:42.372177  344703 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:15:42.451512  344703 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:15:42.533323  344703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:15:42.551510  344703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:15:42.578571  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:15:42.595946  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:15:42.615207  344703 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:15:42.615419  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:15:42.633805  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:15:42.648976  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:15:42.667228  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:15:42.682617  344703 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:15:42.696113  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:15:42.709911  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:15:42.724511  344703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:15:42.739136  344703 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:15:42.752291  344703 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:15:42.764737  344703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:15:42.856585  344703 ssh_runner.go:195] Run: sudo systemctl restart containerd
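(The sed edits above all target /etc/containerd/config.toml; under the stock containerd 1.7 layout, with the section paths assumed here, their net effect is roughly the following, after which the restart picks it up:)
# [plugins."io.containerd.grpc.v1.cri"]
#   enable_unprivileged_ports = true
#   restrict_oom_score_adj = false
#   sandbox_image = "registry.k8s.io/pause:3.10.1"
#   [plugins."io.containerd.grpc.v1.cri".cni]
#     conf_dir = "/etc/cni/net.d"
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     SystemdCgroup = true
sudo systemctl daemon-reload && sudo systemctl restart containerd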
	I0919 23:15:43.013637  344703 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0919 23:15:43.013734  344703 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0919 23:15:43.019593  344703 start.go:563] Will wait 60s for crictl version
	I0919 23:15:43.019664  344703 ssh_runner.go:195] Run: which crictl
	I0919 23:15:43.025226  344703 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:15:43.082693  344703 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0919 23:15:43.082822  344703 ssh_runner.go:195] Run: containerd --version
	I0919 23:15:43.122943  344703 ssh_runner.go:195] Run: containerd --version
	I0919 23:15:43.160753  344703 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0919 23:15:45.444732  337922 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:15:45.444807  337922 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:15:45.444930  337922 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:15:45.445030  337922 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:15:45.445115  337922 kubeadm.go:310] OS: Linux
	I0919 23:15:45.445229  337922 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:15:45.445299  337922 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:15:45.445359  337922 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:15:45.445468  337922 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:15:45.445547  337922 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:15:45.445632  337922 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:15:45.445715  337922 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:15:45.445805  337922 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:15:45.445916  337922 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:15:45.446121  337922 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:15:45.446274  337922 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:15:45.446373  337922 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:15:45.450361  337922 out.go:252]   - Generating certificates and keys ...
	I0919 23:15:45.450567  337922 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:15:45.450684  337922 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:15:45.450781  337922 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:15:45.450855  337922 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:15:45.450932  337922 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:15:45.451004  337922 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:15:45.451067  337922 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:15:45.451364  337922 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-896447 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0919 23:15:45.451462  337922 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:15:45.451660  337922 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-896447 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0919 23:15:45.451755  337922 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:15:45.451843  337922 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:15:45.451905  337922 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:15:45.451989  337922 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:15:45.452057  337922 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:15:45.452136  337922 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:15:45.452258  337922 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:15:45.452362  337922 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:15:45.452481  337922 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:15:45.452599  337922 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:15:45.452710  337922 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:15:45.454716  337922 out.go:252]   - Booting up control plane ...
	I0919 23:15:45.454814  337922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:15:45.454910  337922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:15:45.455028  337922 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:15:45.455208  337922 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:15:45.455333  337922 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:15:45.455492  337922 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:15:45.455622  337922 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:15:45.455680  337922 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:15:45.455872  337922 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:15:45.456030  337922 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:15:45.456117  337922 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501123072s
	I0919 23:15:45.456251  337922 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:15:45.456321  337922 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0919 23:15:45.456482  337922 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:15:45.456610  337922 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:15:45.456748  337922 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.504380857s
	I0919 23:15:45.456870  337922 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.069098705s
	I0919 23:15:45.456967  337922 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.502775889s
	I0919 23:15:45.457113  337922 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:15:45.457303  337922 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:15:45.457393  337922 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:15:45.457756  337922 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-896447 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:15:45.457869  337922 kubeadm.go:310] [bootstrap-token] Using token: ldywn1.hhm1ey7n54hgdxgs
	I0919 23:15:45.460976  337922 out.go:252]   - Configuring RBAC rules ...
	I0919 23:15:45.461111  337922 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:15:45.461270  337922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:15:45.461689  337922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:15:45.461873  337922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:15:45.462030  337922 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:15:45.462302  337922 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:15:45.462519  337922 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:15:45.462596  337922 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:15:45.462668  337922 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:15:45.462681  337922 kubeadm.go:310] 
	I0919 23:15:45.462782  337922 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:15:45.462793  337922 kubeadm.go:310] 
	I0919 23:15:45.462910  337922 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:15:45.462922  337922 kubeadm.go:310] 
	I0919 23:15:45.462959  337922 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:15:45.463052  337922 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:15:45.463193  337922 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:15:45.463222  337922 kubeadm.go:310] 
	I0919 23:15:45.463324  337922 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:15:45.463348  337922 kubeadm.go:310] 
	I0919 23:15:45.463422  337922 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:15:45.463433  337922 kubeadm.go:310] 
	I0919 23:15:45.463500  337922 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:15:45.463616  337922 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:15:45.463713  337922 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:15:45.463746  337922 kubeadm.go:310] 
	I0919 23:15:45.463858  337922 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:15:45.463973  337922 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:15:45.463978  337922 kubeadm.go:310] 
	I0919 23:15:45.464088  337922 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ldywn1.hhm1ey7n54hgdxgs \
	I0919 23:15:45.464244  337922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 \
	I0919 23:15:45.464283  337922 kubeadm.go:310] 	--control-plane 
	I0919 23:15:45.464308  337922 kubeadm.go:310] 
	I0919 23:15:45.464454  337922 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:15:45.464478  337922 kubeadm.go:310] 
	I0919 23:15:45.464599  337922 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ldywn1.hhm1ey7n54hgdxgs \
	I0919 23:15:45.464766  337922 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dae3dd920fb027024a058f7784382f806dfdbf0483a893c299b72dd41dc8aff6 
	I0919 23:15:45.464797  337922 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0919 23:15:45.470347  337922 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	W0919 23:15:41.828842  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:15:44.326627  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	I0919 23:15:42.822304  326932 system_pods.go:86] 9 kube-system pods found
	I0919 23:15:42.822344  326932 system_pods.go:89] "calico-kube-controllers-59556d9b4c-2fjg9" [8dc0b3b6-f557-4ad1-85ae-60ddc48950a7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0919 23:15:42.822357  326932 system_pods.go:89] "calico-node-r2l7z" [ebd611c4-409f-4cc4-8508-4c5892b758d8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0919 23:15:42.822366  326932 system_pods.go:89] "coredns-66bc5c9577-dldb4" [a59ee5bf-734d-4164-896a-639bb683ff7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:42.822372  326932 system_pods.go:89] "etcd-calico-896447" [41123509-0390-4020-808b-bce50ffb6c4f] Running
	I0919 23:15:42.822381  326932 system_pods.go:89] "kube-apiserver-calico-896447" [25b92dfb-471c-422d-998e-cde4285e1283] Running
	I0919 23:15:42.822387  326932 system_pods.go:89] "kube-controller-manager-calico-896447" [42b190c3-2f24-4dd1-8b90-58c440546dc7] Running
	I0919 23:15:42.822394  326932 system_pods.go:89] "kube-proxy-fwxkb" [634c6d7b-ea71-4998-b7ac-c792099dadd4] Running
	I0919 23:15:42.822399  326932 system_pods.go:89] "kube-scheduler-calico-896447" [e005a269-f2a5-41f2-b518-57da14a4830f] Running
	I0919 23:15:42.822404  326932 system_pods.go:89] "storage-provisioner" [0a6daeb2-d50a-4464-86cc-1f501ec95e29] Running
	I0919 23:15:42.822427  326932 retry.go:31] will retry after 11.166139539s: missing components: kube-dns
	I0919 23:15:43.162717  344703 cli_runner.go:164] Run: docker network inspect enable-default-cni-896447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:15:43.189336  344703 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0919 23:15:43.195787  344703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:15:43.215896  344703 kubeadm.go:875] updating cluster {Name:enable-default-cni-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:enable-default-cni-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:15:43.216029  344703 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 23:15:43.216092  344703 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:15:43.274643  344703 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:15:43.274669  344703 containerd.go:534] Images already preloaded, skipping extraction
	I0919 23:15:43.274857  344703 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:15:43.329488  344703 containerd.go:627] all images are preloaded for containerd runtime.
	I0919 23:15:43.329511  344703 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:15:43.329522  344703 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 containerd true true} ...
	I0919 23:15:43.329701  344703 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-896447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:enable-default-cni-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
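(The ExecStart flags above are written a few lines below into a systemd drop-in, /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. One way to confirm what systemd actually merged on the node:)
systemctl cat kubelet          # unit plus drop-ins, including the 10-kubeadm.conf override
systemctl status kubelet --no-pager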
	I0919 23:15:43.329783  344703 ssh_runner.go:195] Run: sudo crictl info
	I0919 23:15:43.388791  344703 cni.go:84] Creating CNI manager for "bridge"
	I0919 23:15:43.388827  344703 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:15:43.388861  344703 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-896447 NodeName:enable-default-cni-896447 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:15:43.389031  344703 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "enable-default-cni-896447"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
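(A generated config like the one above can be exercised without touching the node once it lands there as /var/tmp/minikube/kubeadm.yaml; a hedged sketch, since the --dry-run pass is not part of the minikube flow shown here:)
sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run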
	I0919 23:15:43.389120  344703 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:15:43.407392  344703 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:15:43.407477  344703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:15:43.422344  344703 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I0919 23:15:43.454990  344703 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:15:43.490771  344703 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I0919 23:15:43.519250  344703 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:15:43.525355  344703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
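(Both /etc/hosts updates above follow the same idempotent pattern: strip any stale line for the name, append a fresh one, and copy the temp file back. A generic sketch, with NAME/IP as placeholders taken from this run:)
NAME=control-plane.minikube.internal
IP=192.168.76.2
{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
sudo cp /tmp/hosts.$$ /etc/hosts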
	I0919 23:15:43.543972  344703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:15:43.643376  344703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:15:43.668514  344703 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447 for IP: 192.168.76.2
	I0919 23:15:43.668538  344703 certs.go:194] generating shared ca certs ...
	I0919 23:15:43.668557  344703 certs.go:226] acquiring lock for ca certs: {Name:mkd7a2e112725f042a76c7be63aef486d6b9bff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:43.668717  344703 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key
	I0919 23:15:43.668774  344703 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key
	I0919 23:15:43.668789  344703 certs.go:256] generating profile certs ...
	I0919 23:15:43.668860  344703 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/client.key
	I0919 23:15:43.668875  344703 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/client.crt with IP's: []
	I0919 23:15:43.805813  344703 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/client.crt ...
	I0919 23:15:43.805853  344703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/client.crt: {Name:mk0464640540612b6e74686b161438202613fde1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:43.806051  344703 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/client.key ...
	I0919 23:15:43.806068  344703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/client.key: {Name:mk367a6ea97357b56137acda36c4237b57e3c702 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:43.806203  344703 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.key.6c00c855
	I0919 23:15:43.806238  344703 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.crt.6c00c855 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0919 23:15:44.175596  344703 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.crt.6c00c855 ...
	I0919 23:15:44.175631  344703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.crt.6c00c855: {Name:mk4630c6f3a2421136b44dcafd50c85aef43ff7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:44.175790  344703 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.key.6c00c855 ...
	I0919 23:15:44.175807  344703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.key.6c00c855: {Name:mkb0e04733d4a30de7229c750f6ce228e8f90973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:44.175905  344703 certs.go:381] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.crt.6c00c855 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.crt
	I0919 23:15:44.175984  344703 certs.go:385] copying /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.key.6c00c855 -> /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.key
	I0919 23:15:44.176039  344703 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.key
	I0919 23:15:44.176054  344703 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.crt with IP's: []
	I0919 23:15:44.350228  344703 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.crt ...
	I0919 23:15:44.350258  344703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.crt: {Name:mk72ed81b89d754d3b39a97ac213a8202ef5300b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:44.350416  344703 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.key ...
	I0919 23:15:44.350430  344703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.key: {Name:mk6e74cf487510ceb651f1076f9f57fc7e73562b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:44.350629  344703 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem (1338 bytes)
	W0919 23:15:44.350679  344703 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210_empty.pem, impossibly tiny 0 bytes
	I0919 23:15:44.350696  344703 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 23:15:44.350728  344703 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:15:44.350763  344703 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:15:44.350810  344703 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/certs/key.pem (1675 bytes)
	I0919 23:15:44.350869  344703 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem (1708 bytes)
	I0919 23:15:44.351521  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:15:44.383087  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:15:44.416676  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:15:44.449964  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:15:44.488929  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 23:15:44.521668  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 23:15:44.553975  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:15:44.593411  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/enable-default-cni-896447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 23:15:44.636337  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/certs/18210.pem --> /usr/share/ca-certificates/18210.pem (1338 bytes)
	I0919 23:15:44.679488  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/ssl/certs/182102.pem --> /usr/share/ca-certificates/182102.pem (1708 bytes)
	I0919 23:15:44.722503  344703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:15:44.763203  344703 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:15:44.791868  344703 ssh_runner.go:195] Run: openssl version
	I0919 23:15:44.799596  344703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18210.pem && ln -fs /usr/share/ca-certificates/18210.pem /etc/ssl/certs/18210.pem"
	I0919 23:15:44.813637  344703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18210.pem
	I0919 23:15:44.820636  344703 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18210.pem
	I0919 23:15:44.820916  344703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18210.pem
	I0919 23:15:44.832739  344703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18210.pem /etc/ssl/certs/51391683.0"
	I0919 23:15:44.848935  344703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182102.pem && ln -fs /usr/share/ca-certificates/182102.pem /etc/ssl/certs/182102.pem"
	I0919 23:15:44.862800  344703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182102.pem
	I0919 23:15:44.868564  344703 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/182102.pem
	I0919 23:15:44.868630  344703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182102.pem
	I0919 23:15:44.880061  344703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182102.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:15:44.896920  344703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:15:44.911404  344703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:44.916561  344703 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:44.916629  344703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:15:44.926128  344703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
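(The test -L / ln -fs steps above build OpenSSL-style trust links: each CA under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink. For the minikube CA that amounts to:)
openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0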
	I0919 23:15:44.943980  344703 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:15:44.948868  344703 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:15:44.948937  344703 kubeadm.go:392] StartCluster: {Name:enable-default-cni-896447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:enable-default-cni-896447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:15:44.949042  344703 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0919 23:15:44.949104  344703 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:15:45.001452  344703 cri.go:89] found id: ""
	I0919 23:15:45.001528  344703 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:15:45.015533  344703 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:15:45.027589  344703 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:15:45.027666  344703 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:15:45.039762  344703 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:15:45.039785  344703 kubeadm.go:157] found existing configuration files:
	
	I0919 23:15:45.039849  344703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:15:45.052324  344703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:15:45.052394  344703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:15:45.065237  344703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:15:45.077902  344703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:15:45.077964  344703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:15:45.090511  344703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:15:45.102541  344703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:15:45.102609  344703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:15:45.115026  344703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:15:45.127298  344703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:15:45.127351  344703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:15:45.138670  344703 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:15:45.208420  344703 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:15:45.286259  344703 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
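(The init above ignores the listed preflight checks because the control plane runs inside a Docker container. The preflight phase can also be exercised on its own; a sketch, not part of the logged run:)
sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
  --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem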
	I0919 23:15:45.472227  337922 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 23:15:45.472300  337922 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0919 23:15:45.476858  337922 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0919 23:15:45.476893  337922 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0919 23:15:45.506768  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 23:15:46.125044  337922 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:15:46.125137  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-896447 minikube.k8s.io/updated_at=2025_09_19T23_15_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=custom-flannel-896447 minikube.k8s.io/primary=true
	I0919 23:15:46.125211  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
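(The command above grants cluster-admin to the kube-system default service account; stated as a plain kubectl call, with the minikube-rbac name taken from the logged command:)
kubectl create clusterrolebinding minikube-rbac \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default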
	I0919 23:15:46.137355  337922 ops.go:34] apiserver oom_adj: -16
	I0919 23:15:46.233475  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:46.734506  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:47.234420  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:47.734419  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:48.234401  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:48.734132  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:49.233737  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:49.734208  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:50.234052  337922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:15:50.311260  337922 kubeadm.go:1105] duration metric: took 4.186159034s to wait for elevateKubeSystemPrivileges
	I0919 23:15:50.311296  337922 kubeadm.go:394] duration metric: took 19.473567437s to StartCluster
	I0919 23:15:50.311318  337922 settings.go:142] acquiring lock: {Name:mkf4af4eab91076a115aed3b017088a6f5e76093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:50.311422  337922 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:15:50.312701  337922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14678/kubeconfig: {Name:mk5ed8b51261e712efaf73ae956ec07e6a42ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:15:50.312969  337922 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0919 23:15:50.312988  337922 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:15:50.313044  337922 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:15:50.313169  337922 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-896447"
	I0919 23:15:50.313175  337922 config.go:182] Loaded profile config "custom-flannel-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:15:50.313190  337922 addons.go:238] Setting addon storage-provisioner=true in "custom-flannel-896447"
	I0919 23:15:50.313223  337922 host.go:66] Checking if "custom-flannel-896447" exists ...
	I0919 23:15:50.313214  337922 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-896447"
	I0919 23:15:50.313249  337922 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-896447"
	I0919 23:15:50.313653  337922 cli_runner.go:164] Run: docker container inspect custom-flannel-896447 --format={{.State.Status}}
	I0919 23:15:50.313830  337922 cli_runner.go:164] Run: docker container inspect custom-flannel-896447 --format={{.State.Status}}
	I0919 23:15:50.315818  337922 out.go:179] * Verifying Kubernetes components...
	I0919 23:15:50.318504  337922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:15:50.344092  337922 addons.go:238] Setting addon default-storageclass=true in "custom-flannel-896447"
	I0919 23:15:50.344212  337922 host.go:66] Checking if "custom-flannel-896447" exists ...
	I0919 23:15:50.344800  337922 cli_runner.go:164] Run: docker container inspect custom-flannel-896447 --format={{.State.Status}}
	I0919 23:15:50.346489  337922 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0919 23:15:46.825866  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:15:48.826707  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:15:50.828329  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	I0919 23:15:50.348318  337922 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:15:50.348346  337922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:15:50.348413  337922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-896447
	I0919 23:15:50.374731  337922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/custom-flannel-896447/id_rsa Username:docker}
	I0919 23:15:50.382128  337922 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:15:50.382393  337922 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:15:50.382801  337922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-896447
	I0919 23:15:50.411174  337922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/custom-flannel-896447/id_rsa Username:docker}
	I0919 23:15:50.432116  337922 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:15:50.496285  337922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:15:50.568253  337922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:15:50.571983  337922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:15:50.716245  337922 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
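(The sed pipeline a few lines above injects a hosts stanza, 192.168.85.1 host.minikube.internal with fallthrough, ahead of the forward block and a log directive ahead of errors; the rewritten Corefile can be inspected with:)
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'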
	I0919 23:15:50.717725  337922 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-896447" to be "Ready" ...
	I0919 23:15:50.727204  337922 node_ready.go:49] node "custom-flannel-896447" is "Ready"
	I0919 23:15:50.727250  337922 node_ready.go:38] duration metric: took 9.484224ms for node "custom-flannel-896447" to be "Ready" ...
	I0919 23:15:50.727268  337922 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:15:50.727428  337922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:15:51.012800  337922 api_server.go:72] duration metric: took 699.789515ms to wait for apiserver process to appear ...
	I0919 23:15:51.012836  337922 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:15:51.012864  337922 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0919 23:15:51.016264  337922 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0919 23:15:51.017923  337922 addons.go:514] duration metric: took 704.871462ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0919 23:15:51.021096  337922 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0919 23:15:51.022299  337922 api_server.go:141] control plane version: v1.34.0
	I0919 23:15:51.022326  337922 api_server.go:131] duration metric: took 9.483448ms to wait for apiserver health ...
	I0919 23:15:51.022335  337922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:15:51.026144  337922 system_pods.go:59] 8 kube-system pods found
	I0919 23:15:51.026199  337922 system_pods.go:61] "coredns-66bc5c9577-l9sjz" [78579dbb-0cd3-4c5f-98d0-32ece641c810] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.026211  337922 system_pods.go:61] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.026224  337922 system_pods.go:61] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:51.026237  337922 system_pods.go:61] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:51.026248  337922 system_pods.go:61] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:15:51.026254  337922 system_pods.go:61] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:51.026260  337922 system_pods.go:61] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:51.026264  337922 system_pods.go:61] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Pending
	I0919 23:15:51.026308  337922 system_pods.go:74] duration metric: took 3.966887ms to wait for pod list to return data ...
	I0919 23:15:51.026323  337922 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:15:51.029356  337922 default_sa.go:45] found service account: "default"
	I0919 23:15:51.029382  337922 default_sa.go:55] duration metric: took 3.05277ms for default service account to be created ...
	I0919 23:15:51.029392  337922 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:15:51.032534  337922 system_pods.go:86] 8 kube-system pods found
	I0919 23:15:51.032570  337922 system_pods.go:89] "coredns-66bc5c9577-l9sjz" [78579dbb-0cd3-4c5f-98d0-32ece641c810] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.032581  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.032590  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:51.032601  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:51.032630  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:15:51.032646  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:51.032656  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:51.032667  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:15:51.032694  337922 retry.go:31] will retry after 235.77365ms: missing components: kube-dns, kube-proxy
	I0919 23:15:51.236585  337922 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-896447" context rescaled to 1 replicas
	I0919 23:15:51.273575  337922 system_pods.go:86] 8 kube-system pods found
	I0919 23:15:51.273617  337922 system_pods.go:89] "coredns-66bc5c9577-l9sjz" [78579dbb-0cd3-4c5f-98d0-32ece641c810] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.273626  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.273636  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:51.273647  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:51.273656  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:15:51.273688  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:51.273700  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:51.273716  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:15:51.273745  337922 retry.go:31] will retry after 343.041377ms: missing components: kube-dns, kube-proxy
	I0919 23:15:51.621514  337922 system_pods.go:86] 8 kube-system pods found
	I0919 23:15:51.621560  337922 system_pods.go:89] "coredns-66bc5c9577-l9sjz" [78579dbb-0cd3-4c5f-98d0-32ece641c810] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.621573  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.621582  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:51.621601  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:51.621613  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:15:51.621627  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:51.621635  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:51.621647  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:15:51.621666  337922 retry.go:31] will retry after 330.136086ms: missing components: kube-dns, kube-proxy
	I0919 23:15:51.956404  337922 system_pods.go:86] 8 kube-system pods found
	I0919 23:15:51.956464  337922 system_pods.go:89] "coredns-66bc5c9577-l9sjz" [78579dbb-0cd3-4c5f-98d0-32ece641c810] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.956472  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:51.956477  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:51.956486  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:51.956491  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:51.956496  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running
	I0919 23:15:51.956501  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:51.956506  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:15:51.956520  337922 retry.go:31] will retry after 392.437325ms: missing components: kube-dns
	I0919 23:15:52.354060  337922 system_pods.go:86] 8 kube-system pods found
	I0919 23:15:52.354093  337922 system_pods.go:89] "coredns-66bc5c9577-l9sjz" [78579dbb-0cd3-4c5f-98d0-32ece641c810] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:52.354101  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:52.354113  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:52.354121  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:52.354124  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:52.354129  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running
	I0919 23:15:52.354137  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:52.354142  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:15:52.354199  337922 retry.go:31] will retry after 536.53104ms: missing components: kube-dns
	I0919 23:15:52.895553  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:15:52.895582  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:52.895589  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:52.895597  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:52.895601  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:52.895607  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:52.895612  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:52.895616  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:15:52.895629  337922 retry.go:31] will retry after 923.672765ms: missing components: kube-dns
	I0919 23:15:53.823341  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:15:53.823382  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:53.823402  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:53.823415  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:15:53.823423  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:53.823435  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:53.823447  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:15:53.823455  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:15:53.823477  337922 retry.go:31] will retry after 1.077598414s: missing components: kube-dns
	W0919 23:15:53.326085  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:15:55.326690  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	I0919 23:15:53.993415  326932 system_pods.go:86] 9 kube-system pods found
	I0919 23:15:53.993450  326932 system_pods.go:89] "calico-kube-controllers-59556d9b4c-2fjg9" [8dc0b3b6-f557-4ad1-85ae-60ddc48950a7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0919 23:15:53.993462  326932 system_pods.go:89] "calico-node-r2l7z" [ebd611c4-409f-4cc4-8508-4c5892b758d8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0919 23:15:53.993472  326932 system_pods.go:89] "coredns-66bc5c9577-dldb4" [a59ee5bf-734d-4164-896a-639bb683ff7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:53.993477  326932 system_pods.go:89] "etcd-calico-896447" [41123509-0390-4020-808b-bce50ffb6c4f] Running
	I0919 23:15:53.993485  326932 system_pods.go:89] "kube-apiserver-calico-896447" [25b92dfb-471c-422d-998e-cde4285e1283] Running
	I0919 23:15:53.993491  326932 system_pods.go:89] "kube-controller-manager-calico-896447" [42b190c3-2f24-4dd1-8b90-58c440546dc7] Running
	I0919 23:15:53.993497  326932 system_pods.go:89] "kube-proxy-fwxkb" [634c6d7b-ea71-4998-b7ac-c792099dadd4] Running
	I0919 23:15:53.993503  326932 system_pods.go:89] "kube-scheduler-calico-896447" [e005a269-f2a5-41f2-b518-57da14a4830f] Running
	I0919 23:15:53.993508  326932 system_pods.go:89] "storage-provisioner" [0a6daeb2-d50a-4464-86cc-1f501ec95e29] Running
	I0919 23:15:53.993528  326932 retry.go:31] will retry after 11.0735947s: missing components: kube-dns
	I0919 23:15:54.906060  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:15:54.906098  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:54.906109  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:15:54.906117  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:15:54.906125  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:54.906132  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:54.906138  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:15:54.906144  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:15:54.906178  337922 retry.go:31] will retry after 1.283889614s: missing components: kube-dns
	I0919 23:15:56.194036  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:15:56.194067  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:56.194073  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:15:56.194080  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:15:56.194085  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:56.194090  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:56.194093  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:15:56.194098  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:15:56.194113  337922 retry.go:31] will retry after 1.121069777s: missing components: kube-dns
	I0919 23:15:57.319937  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:15:57.319972  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:57.319980  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:15:57.319988  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:15:57.319995  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:57.320002  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:57.320007  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:15:57.320013  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:15:57.320033  337922 retry.go:31] will retry after 1.960539688s: missing components: kube-dns
	I0919 23:15:59.285894  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:15:59.285929  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:15:59.285935  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:15:59.285942  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:15:59.285946  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:15:59.285951  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:15:59.285955  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:15:59.285959  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:15:59.285973  337922 retry.go:31] will retry after 2.809840366s: missing components: kube-dns
	W0919 23:15:57.327323  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:15:59.825005  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	I0919 23:16:02.100695  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:16:02.100735  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:02.100746  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:16:02.100753  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:16:02.100757  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:16:02.100762  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:16:02.100766  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:16:02.100770  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:16:02.100784  337922 retry.go:31] will retry after 3.200482563s: missing components: kube-dns
	W0919 23:16:01.825989  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:16:03.826331  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:16:05.826869  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	I0919 23:16:05.072876  326932 system_pods.go:86] 9 kube-system pods found
	I0919 23:16:05.072911  326932 system_pods.go:89] "calico-kube-controllers-59556d9b4c-2fjg9" [8dc0b3b6-f557-4ad1-85ae-60ddc48950a7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0919 23:16:05.072919  326932 system_pods.go:89] "calico-node-r2l7z" [ebd611c4-409f-4cc4-8508-4c5892b758d8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0919 23:16:05.072934  326932 system_pods.go:89] "coredns-66bc5c9577-dldb4" [a59ee5bf-734d-4164-896a-639bb683ff7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:05.072944  326932 system_pods.go:89] "etcd-calico-896447" [41123509-0390-4020-808b-bce50ffb6c4f] Running
	I0919 23:16:05.072951  326932 system_pods.go:89] "kube-apiserver-calico-896447" [25b92dfb-471c-422d-998e-cde4285e1283] Running
	I0919 23:16:05.072955  326932 system_pods.go:89] "kube-controller-manager-calico-896447" [42b190c3-2f24-4dd1-8b90-58c440546dc7] Running
	I0919 23:16:05.072961  326932 system_pods.go:89] "kube-proxy-fwxkb" [634c6d7b-ea71-4998-b7ac-c792099dadd4] Running
	I0919 23:16:05.072964  326932 system_pods.go:89] "kube-scheduler-calico-896447" [e005a269-f2a5-41f2-b518-57da14a4830f] Running
	I0919 23:16:05.072969  326932 system_pods.go:89] "storage-provisioner" [0a6daeb2-d50a-4464-86cc-1f501ec95e29] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:16:05.072988  326932 retry.go:31] will retry after 15.661468577s: missing components: kube-dns
	I0919 23:16:05.306364  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:16:05.306435  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:05.306445  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:16:05.306454  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:16:05.306458  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:16:05.306463  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:16:05.306468  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:16:05.306472  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:16:05.306486  337922 retry.go:31] will retry after 3.811447815s: missing components: kube-dns
	I0919 23:16:09.125696  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:16:09.125737  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:09.125747  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:16:09.125757  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:16:09.125763  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:16:09.125771  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:16:09.125777  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:16:09.125785  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:16:09.125806  337922 retry.go:31] will retry after 4.399926051s: missing components: kube-dns
	W0919 23:16:08.326079  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:16:10.826454  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	I0919 23:16:13.532009  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:16:13.532041  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:13.532047  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:16:13.532054  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:16:13.532059  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:16:13.532063  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:16:13.532068  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:16:13.532071  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:16:13.532085  337922 retry.go:31] will retry after 5.921906271s: missing components: kube-dns
	W0919 23:16:12.826970  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	W0919 23:16:15.325494  340598 pod_ready.go:104] pod "coredns-66bc5c9577-qj565" is not "Ready", error: <nil>
	I0919 23:16:16.325747  340598 pod_ready.go:94] pod "coredns-66bc5c9577-qj565" is "Ready"
	I0919 23:16:16.325773  340598 pod_ready.go:86] duration metric: took 36.506135595s for pod "coredns-66bc5c9577-qj565" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:16.328735  340598 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:16.333309  340598 pod_ready.go:94] pod "etcd-default-k8s-diff-port-149888" is "Ready"
	I0919 23:16:16.333333  340598 pod_ready.go:86] duration metric: took 4.572083ms for pod "etcd-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:16.336000  340598 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:16.340535  340598 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-149888" is "Ready"
	I0919 23:16:16.340558  340598 pod_ready.go:86] duration metric: took 4.532781ms for pod "kube-apiserver-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:16.342854  340598 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:16.523474  340598 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-149888" is "Ready"
	I0919 23:16:16.523504  340598 pod_ready.go:86] duration metric: took 180.619849ms for pod "kube-controller-manager-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:16.723944  340598 pod_ready.go:83] waiting for pod "kube-proxy-txcms" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:17.123269  340598 pod_ready.go:94] pod "kube-proxy-txcms" is "Ready"
	I0919 23:16:17.123300  340598 pod_ready.go:86] duration metric: took 399.331369ms for pod "kube-proxy-txcms" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:17.324363  340598 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:17.724691  340598 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-149888" is "Ready"
	I0919 23:16:17.724717  340598 pod_ready.go:86] duration metric: took 400.321939ms for pod "kube-scheduler-default-k8s-diff-port-149888" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:16:17.724728  340598 pod_ready.go:40] duration metric: took 37.913532643s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:16:17.774479  340598 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:16:17.777134  340598 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-149888" cluster and "default" namespace by default
	I0919 23:16:19.460410  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:16:19.460441  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:19.460447  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:16:19.460454  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:16:19.460459  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:16:19.460464  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:16:19.460468  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:16:19.460473  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:16:19.460487  337922 retry.go:31] will retry after 7.530517256s: missing components: kube-dns
	I0919 23:16:20.744277  326932 system_pods.go:86] 9 kube-system pods found
	I0919 23:16:20.744319  326932 system_pods.go:89] "calico-kube-controllers-59556d9b4c-2fjg9" [8dc0b3b6-f557-4ad1-85ae-60ddc48950a7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0919 23:16:20.744335  326932 system_pods.go:89] "calico-node-r2l7z" [ebd611c4-409f-4cc4-8508-4c5892b758d8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0919 23:16:20.744345  326932 system_pods.go:89] "coredns-66bc5c9577-dldb4" [a59ee5bf-734d-4164-896a-639bb683ff7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:20.744352  326932 system_pods.go:89] "etcd-calico-896447" [41123509-0390-4020-808b-bce50ffb6c4f] Running
	I0919 23:16:20.744360  326932 system_pods.go:89] "kube-apiserver-calico-896447" [25b92dfb-471c-422d-998e-cde4285e1283] Running
	I0919 23:16:20.744366  326932 system_pods.go:89] "kube-controller-manager-calico-896447" [42b190c3-2f24-4dd1-8b90-58c440546dc7] Running
	I0919 23:16:20.744374  326932 system_pods.go:89] "kube-proxy-fwxkb" [634c6d7b-ea71-4998-b7ac-c792099dadd4] Running
	I0919 23:16:20.744389  326932 system_pods.go:89] "kube-scheduler-calico-896447" [e005a269-f2a5-41f2-b518-57da14a4830f] Running
	I0919 23:16:20.744395  326932 system_pods.go:89] "storage-provisioner" [0a6daeb2-d50a-4464-86cc-1f501ec95e29] Running
	I0919 23:16:20.744421  326932 retry.go:31] will retry after 24.317497144s: missing components: kube-dns
	I0919 23:16:26.998293  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:16:26.998331  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:26.998339  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:16:26.998348  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:16:26.998354  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:16:26.998363  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:16:26.998368  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:16:26.998376  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:16:26.998395  337922 retry.go:31] will retry after 7.159590412s: missing components: kube-dns
	I0919 23:16:34.163924  337922 system_pods.go:86] 7 kube-system pods found
	I0919 23:16:34.163964  337922 system_pods.go:89] "coredns-66bc5c9577-w6tjl" [d431d8a1-5d4b-4063-9fe7-a7800daddff5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:16:34.163972  337922 system_pods.go:89] "etcd-custom-flannel-896447" [dd5dce3a-26cd-42fb-8f88-eed7ca1e4a74] Running
	I0919 23:16:34.163980  337922 system_pods.go:89] "kube-apiserver-custom-flannel-896447" [f9a925ce-01c9-499d-a697-2e50f393cc07] Running
	I0919 23:16:34.163986  337922 system_pods.go:89] "kube-controller-manager-custom-flannel-896447" [4dfa1c34-3227-442c-a9f9-077398085b53] Running
	I0919 23:16:34.163994  337922 system_pods.go:89] "kube-proxy-j5g8g" [e883d4e2-bb95-4b9f-8124-cc95201a6277] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:16:34.164000  337922 system_pods.go:89] "kube-scheduler-custom-flannel-896447" [f0ac73fa-d6f6-446d-bb07-db2442d62622] Running
	I0919 23:16:34.164009  337922 system_pods.go:89] "storage-provisioner" [40df5598-e004-4f78-835c-fae276aca87a] Running
	I0919 23:16:34.164028  337922 retry.go:31] will retry after 9.503575497s: missing components: kube-dns
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	f7b7a30f8e989       523cad1a4df73       10 seconds ago       Exited              dashboard-metrics-scraper   3                   7b6f3d7472a9b       dashboard-metrics-scraper-6ffb444bf9-nzjcs
	01f4b9ca69414       6e38f40d628db       14 seconds ago       Running             storage-provisioner         4                   4dc3c0061c9e9       storage-provisioner
	75639c6a69c7f       07655ddf2eebe       47 seconds ago       Running             kubernetes-dashboard        0                   4b84ae45ce6bc       kubernetes-dashboard-855c9754f9-tjkd6
	911420cee9a03       56cc512116c8f       57 seconds ago       Running             busybox                     1                   0f9d3dd2727c6       busybox
	d5c7e98006716       52546a367cc9e       57 seconds ago       Running             coredns                     1                   2c8fe2fa4d8eb       coredns-66bc5c9577-qj565
	50bbcbe6da8c0       6e38f40d628db       58 seconds ago       Exited              storage-provisioner         3                   4dc3c0061c9e9       storage-provisioner
	442a72e42dd57       df0860106674d       58 seconds ago       Running             kube-proxy                  4                   01742fa0ef8bc       kube-proxy-txcms
	c797e480e1280       409467f978b4a       58 seconds ago       Running             kindnet-cni                 1                   faaa0cbb65b2f       kindnet-4nqpl
	665ad2965128a       a0af72f2ec6d6       About a minute ago   Running             kube-controller-manager     1                   f8c9032e570d2       kube-controller-manager-default-k8s-diff-port-149888
	ad0c48b900b49       46169d968e920       About a minute ago   Running             kube-scheduler              1                   66c1819be8b99       kube-scheduler-default-k8s-diff-port-149888
	91c6fdc1fceb1       5f1f5298c888d       About a minute ago   Running             etcd                        1                   44fc528002938       etcd-default-k8s-diff-port-149888
	9024edac09e01       90550c43ad2bc       About a minute ago   Running             kube-apiserver              1                   e51828479176c       kube-apiserver-default-k8s-diff-port-149888
	296905fadad35       56cc512116c8f       About a minute ago   Exited              busybox                     0                   f4394928246fd       busybox
	8fe3c9f630050       52546a367cc9e       About a minute ago   Exited              coredns                     0                   40c79732ed9ad       coredns-66bc5c9577-qj565
	351f4368e8712       df0860106674d       2 minutes ago        Exited              kube-proxy                  3                   c885f7a6b94c4       kube-proxy-txcms
	fc26366126b18       409467f978b4a       2 minutes ago        Exited              kindnet-cni                 0                   2f402c2a337cb       kindnet-4nqpl
	bbfb1c954fb10       46169d968e920       3 minutes ago        Exited              kube-scheduler              0                   41315b7fcfdd6       kube-scheduler-default-k8s-diff-port-149888
	c43b276ad6480       5f1f5298c888d       3 minutes ago        Exited              etcd                        0                   bd64faadeff7f       etcd-default-k8s-diff-port-149888
	c2e3a7b89e470       a0af72f2ec6d6       3 minutes ago        Exited              kube-controller-manager     0                   6d6d2b50fb9ff       kube-controller-manager-default-k8s-diff-port-149888
	6cb08d2f210ed       90550c43ad2bc       3 minutes ago        Exited              kube-apiserver              0                   64d233f4794ef       kube-apiserver-default-k8s-diff-port-149888
	
	
	==> containerd <==
	Sep 19 23:16:26 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:26.877493325Z" level=info msg="StartContainer for \"f7b7a30f8e989f86d14d26eb1f419fddf6480eb4bf6bfa06559d6ce4730d46c0\""
	Sep 19 23:16:26 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:26.947924694Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to create inotify fd"
	Sep 19 23:16:26 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:26.950740908Z" level=info msg="StartContainer for \"f7b7a30f8e989f86d14d26eb1f419fddf6480eb4bf6bfa06559d6ce4730d46c0\" returns successfully"
	Sep 19 23:16:26 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:26.965006786Z" level=info msg="received exit event container_id:\"f7b7a30f8e989f86d14d26eb1f419fddf6480eb4bf6bfa06559d6ce4730d46c0\"  id:\"f7b7a30f8e989f86d14d26eb1f419fddf6480eb4bf6bfa06559d6ce4730d46c0\"  pid:2561  exit_status:1  exited_at:{seconds:1758323786  nanos:963941580}"
	Sep 19 23:16:27 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:27.003133936Z" level=info msg="shim disconnected" id=f7b7a30f8e989f86d14d26eb1f419fddf6480eb4bf6bfa06559d6ce4730d46c0 namespace=k8s.io
	Sep 19 23:16:27 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:27.003244373Z" level=warning msg="cleaning up after shim disconnected" id=f7b7a30f8e989f86d14d26eb1f419fddf6480eb4bf6bfa06559d6ce4730d46c0 namespace=k8s.io
	Sep 19 23:16:27 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:27.003284415Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 19 23:16:27 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:27.204965015Z" level=info msg="RemoveContainer for \"77be17c43496b56e40d07eda68898f1d52f0a4ab40c322c86030c94680335c0e\""
	Sep 19 23:16:27 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:27.214767229Z" level=info msg="RemoveContainer for \"77be17c43496b56e40d07eda68898f1d52f0a4ab40c322c86030c94680335c0e\" returns successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.035087579Z" level=info msg="StopPodSandbox for \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\""
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.035458371Z" level=info msg="TearDown network for sandbox \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\" successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.035493489Z" level=info msg="StopPodSandbox for \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\" returns successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.040113206Z" level=info msg="RemovePodSandbox for \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\""
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.040185433Z" level=info msg="Forcibly stopping sandbox \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\""
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.040302623Z" level=info msg="TearDown network for sandbox \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\" successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.048754867Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.048871381Z" level=info msg="RemovePodSandbox \"b36f8e8cae0db3ff438e95ed319b08410f36620ba95d32e847221ab05389d06c\" returns successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.050147942Z" level=info msg="StopPodSandbox for \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\""
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.050298152Z" level=info msg="TearDown network for sandbox \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\" successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.050317485Z" level=info msg="StopPodSandbox for \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\" returns successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.050885429Z" level=info msg="RemovePodSandbox for \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\""
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.050919184Z" level=info msg="Forcibly stopping sandbox \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\""
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.051269282Z" level=info msg="TearDown network for sandbox \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\" successfully"
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.058870199Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 19 23:16:32 default-k8s-diff-port-149888 containerd[477]: time="2025-09-19T23:16:32.058996037Z" level=info msg="RemovePodSandbox \"448e3f788d1c7c5b4f0a528f94420299c25c19e163f4dbc584f88a490e67f8c8\" returns successfully"
	
	
	==> coredns [8fe3c9f630050a4562bdee872e3bd5b158ebb872b819c9704b64439e00342d40] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37991 - 35562 "HINFO IN 9064399029636666911.5004093786555544926. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.059647854s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d5c7e980067162f7d7cdd11137f7223d6574824e17220f7c25d8e3708be42a76] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45959 - 29957 "HINFO IN 3487927117264530873.7781742703901399107. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.068781999s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-149888
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-149888
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=default-k8s-diff-port-149888
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_13_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:13:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-149888
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:16:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:16:06 +0000   Fri, 19 Sep 2025 23:13:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:16:06 +0000   Fri, 19 Sep 2025 23:13:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:16:06 +0000   Fri, 19 Sep 2025 23:13:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:16:06 +0000   Fri, 19 Sep 2025 23:13:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-149888
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 e74d6dfd16154bd0b4ac1ae2d5aaa930
	  System UUID:                48f7b01c-0e5a-4c51-b5e5-65660304d365
	  Boot ID:                    760555a9-6fca-43eb-a2c3-c2de6bc00e61
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-qj565                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m59s
	  kube-system                 etcd-default-k8s-diff-port-149888                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m8s
	  kube-system                 kindnet-4nqpl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m59s
	  kube-system                 kube-apiserver-default-k8s-diff-port-149888             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-149888    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 kube-proxy-txcms                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  kube-system                 kube-scheduler-default-k8s-diff-port-149888             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 metrics-server-746fcd58dc-hskrc                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         85s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nzjcs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tjkd6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 57s                kube-proxy       
	  Normal  Starting                 2m16s              kube-proxy       
	  Normal  Starting                 3m4s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m4s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m4s               kubelet          Node default-k8s-diff-port-149888 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s               kubelet          Node default-k8s-diff-port-149888 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s               kubelet          Node default-k8s-diff-port-149888 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m                 node-controller  Node default-k8s-diff-port-149888 event: Registered Node default-k8s-diff-port-149888 in Controller
	  Normal  NodeHasNoDiskPressure    65s (x7 over 65s)  kubelet          Node default-k8s-diff-port-149888 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  65s (x9 over 65s)  kubelet          Node default-k8s-diff-port-149888 status is now: NodeHasSufficientMemory
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     65s (x7 over 65s)  kubelet          Node default-k8s-diff-port-149888 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  65s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           56s                node-controller  Node default-k8s-diff-port-149888 event: Registered Node default-k8s-diff-port-149888 in Controller
	  Normal  Starting                 5s                 kubelet          Starting kubelet.
	  Normal  Starting                 5s                 kubelet          Starting kubelet.
	  Normal  Starting                 4s                 kubelet          Starting kubelet.
	  Normal  Starting                 3s                 kubelet          Starting kubelet.
	  Normal  Starting                 2s                 kubelet          Starting kubelet.
	  Normal  Starting                 2s                 kubelet          Starting kubelet.
	  Normal  Starting                 1s                 kubelet          Starting kubelet.
	
	
	==> dmesg <==
	[  +0.995983] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.504855] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.994952] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.505945] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501588] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.993278] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.507907] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500995] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.992251] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.509112] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501500] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989799] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.510643] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501653] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989383] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.511830] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500482] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.989056] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +0.512088] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.500241] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501649] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501291] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[  +1.501422] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	[Sep19 23:13] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth7798ce5e
	
	
	==> etcd [91c6fdc1fceb1c8c70caa847aea8dfc0a97f915a36dc2754d386368a179f0728] <==
	{"level":"warn","ts":"2025-09-19T23:15:37.740607Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.577396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/default-k8s-diff-port-149888.1866d21e77b76b1e\" limit:1 ","response":"range_response_count:1 size:793"}
	{"level":"info","ts":"2025-09-19T23:15:37.740643Z","caller":"traceutil/trace.go:172","msg":"trace[598269319] range","detail":"{range_begin:/registry/events/default/default-k8s-diff-port-149888.1866d21e77b76b1e; range_end:; response_count:1; response_revision:562; }","duration":"107.626007ms","start":"2025-09-19T23:15:37.633005Z","end":"2025-09-19T23:15:37.740631Z","steps":["trace[598269319] 'agreement among raft nodes before linearized reading'  (duration: 107.504715ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:15:37.930915Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.510485ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-746fcd58dc-hskrc\" limit:1 ","response":"range_response_count:1 size:4384"}
	{"level":"info","ts":"2025-09-19T23:15:37.930985Z","caller":"traceutil/trace.go:172","msg":"trace[501430462] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-746fcd58dc-hskrc; range_end:; response_count:1; response_revision:564; }","duration":"149.59428ms","start":"2025-09-19T23:15:37.781372Z","end":"2025-09-19T23:15:37.930967Z","steps":["trace[501430462] 'agreement among raft nodes before linearized reading'  (duration: 59.090763ms)","trace[501430462] 'range keys from in-memory index tree'  (duration: 90.310594ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:15:37.931009Z","caller":"traceutil/trace.go:172","msg":"trace[1314534809] transaction","detail":"{read_only:false; response_revision:565; number_of_response:1; }","duration":"150.096324ms","start":"2025-09-19T23:15:37.780898Z","end":"2025-09-19T23:15:37.930994Z","steps":["trace[1314534809] 'process raft request'  (duration: 59.608146ms)","trace[1314534809] 'compare'  (duration: 90.28644ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:15:37.931083Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.463579ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" limit:1 ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2025-09-19T23:15:37.931137Z","caller":"traceutil/trace.go:172","msg":"trace[2002339098] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:1; response_revision:565; }","duration":"149.524374ms","start":"2025-09-19T23:15:37.781599Z","end":"2025-09-19T23:15:37.931123Z","steps":["trace[2002339098] 'agreement among raft nodes before linearized reading'  (duration: 149.378912ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:15:37.931274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.984776ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:node-problem-detector\" limit:1 ","response":"range_response_count:1 size:655"}
	{"level":"info","ts":"2025-09-19T23:15:37.931314Z","caller":"traceutil/trace.go:172","msg":"trace[1441136243] range","detail":"{range_begin:/registry/clusterroles/system:node-problem-detector; range_end:; response_count:1; response_revision:565; }","duration":"148.030013ms","start":"2025-09-19T23:15:37.783274Z","end":"2025-09-19T23:15:37.931304Z","steps":["trace[1441136243] 'agreement among raft nodes before linearized reading'  (duration: 147.913859ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:15:38.346045Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.796155ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:15:38.346120Z","caller":"traceutil/trace.go:172","msg":"trace[437704354] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:571; }","duration":"156.891144ms","start":"2025-09-19T23:15:38.189216Z","end":"2025-09-19T23:15:38.346107Z","steps":["trace[437704354] 'agreement among raft nodes before linearized reading'  (duration: 91.19849ms)","trace[437704354] 'range keys from in-memory index tree'  (duration: 65.556724ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:15:38.346412Z","caller":"traceutil/trace.go:172","msg":"trace[1617977273] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"206.929494ms","start":"2025-09-19T23:15:38.139458Z","end":"2025-09-19T23:15:38.346388Z","steps":["trace[1617977273] 'process raft request'  (duration: 140.735783ms)","trace[1617977273] 'compare'  (duration: 65.877219ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:15:38.346450Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.646237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:certificates.k8s.io:kube-apiserver-client-approver\" limit:1 ","response":"range_response_count:1 size:700"}
	{"level":"info","ts":"2025-09-19T23:15:38.346496Z","caller":"traceutil/trace.go:172","msg":"trace[2088407094] range","detail":"{range_begin:/registry/clusterroles/system:certificates.k8s.io:kube-apiserver-client-approver; range_end:; response_count:1; response_revision:572; }","duration":"124.706762ms","start":"2025-09-19T23:15:38.221777Z","end":"2025-09-19T23:15:38.346484Z","steps":["trace[2088407094] 'agreement among raft nodes before linearized reading'  (duration: 124.556854ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:15:38.346514Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.991006ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/default-k8s-diff-port-149888.1866d21e77b76b1e\" limit:1 ","response":"range_response_count:1 size:793"}
	{"level":"info","ts":"2025-09-19T23:15:38.346545Z","caller":"traceutil/trace.go:172","msg":"trace[702841334] range","detail":"{range_begin:/registry/events/default/default-k8s-diff-port-149888.1866d21e77b76b1e; range_end:; response_count:1; response_revision:572; }","duration":"124.029853ms","start":"2025-09-19T23:15:38.222508Z","end":"2025-09-19T23:15:38.346537Z","steps":["trace[702841334] 'agreement among raft nodes before linearized reading'  (duration: 123.892254ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:15:38.472627Z","caller":"traceutil/trace.go:172","msg":"trace[1485426770] linearizableReadLoop","detail":"{readStateIndex:619; appliedIndex:619; }","duration":"123.952756ms","start":"2025-09-19T23:15:38.348647Z","end":"2025-09-19T23:15:38.472600Z","steps":["trace[1485426770] 'read index received'  (duration: 123.94351ms)","trace[1485426770] 'applied index is now lower than readState.Index'  (duration: 7.5µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:15:38.621407Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"272.734927ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver\" limit:1 ","response":"range_response_count:1 size:724"}
	{"level":"info","ts":"2025-09-19T23:15:38.621559Z","caller":"traceutil/trace.go:172","msg":"trace[655080171] range","detail":"{range_begin:/registry/clusterroles/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver; range_end:; response_count:1; response_revision:572; }","duration":"272.870407ms","start":"2025-09-19T23:15:38.348633Z","end":"2025-09-19T23:15:38.621504Z","steps":["trace[655080171] 'agreement among raft nodes before linearized reading'  (duration: 124.064118ms)","trace[655080171] 'range keys from in-memory index tree'  (duration: 148.550822ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:15:38.622106Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.864791ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873788782990676015 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-149888.1866d21e77b76b1e\" mod_revision:568 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-149888.1866d21e77b76b1e\" value_size:690 lease:4650416746135900089 >> failure:<request_range:<key:\"/registry/events/default/default-k8s-diff-port-149888.1866d21e77b76b1e\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:15:38.622233Z","caller":"traceutil/trace.go:172","msg":"trace[1478913785] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"273.906277ms","start":"2025-09-19T23:15:38.348310Z","end":"2025-09-19T23:15:38.622216Z","steps":["trace[1478913785] 'process raft request'  (duration: 124.374288ms)","trace[1478913785] 'compare'  (duration: 148.747187ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:15:38.818947Z","caller":"traceutil/trace.go:172","msg":"trace[1836003029] linearizableReadLoop","detail":"{readStateIndex:623; appliedIndex:623; }","duration":"114.435179ms","start":"2025-09-19T23:15:38.704489Z","end":"2025-09-19T23:15:38.818924Z","steps":["trace[1836003029] 'read index received'  (duration: 114.426566ms)","trace[1836003029] 'applied index is now lower than readState.Index'  (duration: 7.53µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:15:38.895390Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.878196ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:disruption-controller\" limit:1 ","response":"range_response_count:1 size:972"}
	{"level":"info","ts":"2025-09-19T23:15:38.895497Z","caller":"traceutil/trace.go:172","msg":"trace[1235492677] range","detail":"{range_begin:/registry/clusterroles/system:controller:disruption-controller; range_end:; response_count:1; response_revision:576; }","duration":"190.973578ms","start":"2025-09-19T23:15:38.704478Z","end":"2025-09-19T23:15:38.895452Z","steps":["trace[1235492677] 'agreement among raft nodes before linearized reading'  (duration: 114.530582ms)","trace[1235492677] 'range keys from in-memory index tree'  (duration: 76.238031ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:15:38.895506Z","caller":"traceutil/trace.go:172","msg":"trace[1349552410] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"193.145386ms","start":"2025-09-19T23:15:38.702350Z","end":"2025-09-19T23:15:38.895495Z","steps":["trace[1349552410] 'process raft request'  (duration: 116.616939ms)","trace[1349552410] 'compare'  (duration: 76.384046ms)"],"step_count":2}
	
	
	==> etcd [c43b276ad64808b3638f48fb95a466e4ac5a6ca6b0f2e698462337fbab846497] <==
	{"level":"warn","ts":"2025-09-19T23:13:27.815430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:13:27.886348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58058","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T23:13:30.004284Z","caller":"traceutil/trace.go:172","msg":"trace[232577443] transaction","detail":"{read_only:false; response_revision:134; number_of_response:1; }","duration":"146.808563ms","start":"2025-09-19T23:13:29.857449Z","end":"2025-09-19T23:13:30.004258Z","steps":["trace[232577443] 'process raft request'  (duration: 58.03485ms)","trace[232577443] 'compare'  (duration: 88.59504ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:13:30.232638Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.408229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:validatingadmissionpolicy-status-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:13:30.232736Z","caller":"traceutil/trace.go:172","msg":"trace[780047360] range","detail":"{range_begin:/registry/clusterroles/system:controller:validatingadmissionpolicy-status-controller; range_end:; response_count:0; response_revision:136; }","duration":"126.565927ms","start":"2025-09-19T23:13:30.106148Z","end":"2025-09-19T23:13:30.232714Z","steps":["trace[780047360] 'range keys from in-memory index tree'  (duration: 126.282286ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:13:30.649310Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.435828ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:13:30.649390Z","caller":"traceutil/trace.go:172","msg":"trace[1505618848] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:143; }","duration":"156.543726ms","start":"2025-09-19T23:13:30.492829Z","end":"2025-09-19T23:13:30.649373Z","steps":["trace[1505618848] 'agreement among raft nodes before linearized reading'  (duration: 78.483421ms)","trace[1505618848] 'range keys from in-memory index tree'  (duration: 77.867014ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:13:30.649472Z","caller":"traceutil/trace.go:172","msg":"trace[374656585] transaction","detail":"{read_only:false; response_revision:144; number_of_response:1; }","duration":"264.988542ms","start":"2025-09-19T23:13:30.384417Z","end":"2025-09-19T23:13:30.649405Z","steps":["trace[374656585] 'process raft request'  (duration: 187.031893ms)","trace[374656585] 'compare'  (duration: 77.744988ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:13:30.912700Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.219233ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873788782958063734 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:node-proxier\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:node-proxier\" value_size:627 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:13:30.913259Z","caller":"traceutil/trace.go:172","msg":"trace[335778829] transaction","detail":"{read_only:false; response_revision:145; number_of_response:1; }","duration":"258.990155ms","start":"2025-09-19T23:13:30.654137Z","end":"2025-09-19T23:13:30.913128Z","steps":["trace[335778829] 'process raft request'  (duration: 128.437082ms)","trace[335778829] 'compare'  (duration: 129.104569ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:13:31.169920Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.012706ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873788782958063736 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:kube-controller-manager\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:kube-controller-manager\" value_size:662 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:13:31.170019Z","caller":"traceutil/trace.go:172","msg":"trace[651860733] transaction","detail":"{read_only:false; response_revision:146; number_of_response:1; }","duration":"251.381159ms","start":"2025-09-19T23:13:30.918620Z","end":"2025-09-19T23:13:31.170001Z","steps":["trace[651860733] 'process raft request'  (duration: 122.215941ms)","trace[651860733] 'compare'  (duration: 128.853571ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:13:31.425478Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.564266ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873788782958063738 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:kube-dns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:kube-dns\" value_size:606 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:13:31.425551Z","caller":"traceutil/trace.go:172","msg":"trace[1663603764] transaction","detail":"{read_only:false; response_revision:147; number_of_response:1; }","duration":"249.884129ms","start":"2025-09-19T23:13:31.175657Z","end":"2025-09-19T23:13:31.425541Z","steps":["trace[1663603764] 'process raft request'  (duration: 121.193789ms)","trace[1663603764] 'compare'  (duration: 128.43632ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:13:31.617572Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.73745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:13:31.617660Z","caller":"traceutil/trace.go:172","msg":"trace[747472130] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:147; }","duration":"124.839931ms","start":"2025-09-19T23:13:31.492800Z","end":"2025-09-19T23:13:31.617640Z","steps":["trace[747472130] 'agreement among raft nodes before linearized reading'  (duration: 61.935669ms)","trace[747472130] 'range keys from in-memory index tree'  (duration: 62.765629ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:13:31.617739Z","caller":"traceutil/trace.go:172","msg":"trace[77664982] transaction","detail":"{read_only:false; response_revision:148; number_of_response:1; }","duration":"187.311583ms","start":"2025-09-19T23:13:31.430403Z","end":"2025-09-19T23:13:31.617714Z","steps":["trace[77664982] 'process raft request'  (duration: 124.392ms)","trace[77664982] 'compare'  (duration: 62.693129ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:13:31.788775Z","caller":"traceutil/trace.go:172","msg":"trace[858407443] transaction","detail":"{read_only:false; response_revision:149; number_of_response:1; }","duration":"166.63206ms","start":"2025-09-19T23:13:31.622117Z","end":"2025-09-19T23:13:31.788749Z","steps":["trace[858407443] 'process raft request'  (duration: 96.994615ms)","trace[858407443] 'compare'  (duration: 69.496757ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:14:11.487036Z","caller":"traceutil/trace.go:172","msg":"trace[396052314] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"157.199919ms","start":"2025-09-19T23:14:11.329811Z","end":"2025-09-19T23:14:11.487010Z","steps":["trace[396052314] 'process raft request'  (duration: 92.008901ms)","trace[396052314] 'compare'  (duration: 65.059975ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:14:11.810827Z","caller":"traceutil/trace.go:172","msg":"trace[754889133] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"122.202087ms","start":"2025-09-19T23:14:11.688602Z","end":"2025-09-19T23:14:11.810804Z","steps":["trace[754889133] 'process raft request'  (duration: 121.963074ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:14:37.609149Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.078707ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873788782958064630 > lease_revoke:<id:408999644103b3a8>","response":"size:28"}
	{"level":"info","ts":"2025-09-19T23:15:05.541640Z","caller":"traceutil/trace.go:172","msg":"trace[1056613088] linearizableReadLoop","detail":"{readStateIndex:520; appliedIndex:520; }","duration":"170.332931ms","start":"2025-09-19T23:15:05.371270Z","end":"2025-09-19T23:15:05.541603Z","steps":["trace[1056613088] 'read index received'  (duration: 170.317954ms)","trace[1056613088] 'applied index is now lower than readState.Index'  (duration: 13.169µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:15:05.541774Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.485971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:15:05.541896Z","caller":"traceutil/trace.go:172","msg":"trace[1110311917] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:486; }","duration":"170.626639ms","start":"2025-09-19T23:15:05.371256Z","end":"2025-09-19T23:15:05.541883Z","steps":["trace[1110311917] 'agreement among raft nodes before linearized reading'  (duration: 170.410354ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:15:05.541808Z","caller":"traceutil/trace.go:172","msg":"trace[805537127] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"189.143186ms","start":"2025-09-19T23:15:05.352650Z","end":"2025-09-19T23:15:05.541793Z","steps":["trace[805537127] 'process raft request'  (duration: 188.991726ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:16:37 up  1:59,  0 users,  load average: 4.26, 4.11, 2.74
	Linux default-k8s-diff-port-149888 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [c797e480e1280f5c22736b8a5bf38e2534ffb233c66eb782ab3f678000ec15e1] <==
	I0919 23:15:39.566984       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0919 23:15:39.567351       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0919 23:15:39.568610       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:15:39.568636       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:15:39.568672       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:15:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:15:39.865685       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:15:39.896176       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:15:39.896517       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:15:39.897002       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0919 23:16:09.803825       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0919 23:16:09.803822       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0919 23:16:09.898679       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0919 23:16:09.898682       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0919 23:16:11.397222       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:16:11.397266       1 metrics.go:72] Registering metrics
	I0919 23:16:11.397388       1 controller.go:711] "Syncing nftables rules"
	I0919 23:16:19.805324       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:16:19.805386       1 main.go:301] handling current node
	I0919 23:16:29.803280       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:16:29.803356       1 main.go:301] handling current node
	
	
	==> kindnet [fc26366126b18bc013992c759f1ace9b13c7b3a4d0bf6ba034cf10b8bc295925] <==
	I0919 23:13:39.382290       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:13:39.382312       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:13:39.382344       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:13:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:13:39.613444       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:13:39.613509       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:13:39.613524       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:13:39.614133       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0919 23:14:09.614611       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0919 23:14:09.614611       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0919 23:14:09.614615       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0919 23:14:09.614759       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0919 23:14:40.793926       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0919 23:14:40.930995       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0919 23:14:40.995735       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0919 23:14:41.169090       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0919 23:14:43.513784       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:14:43.513821       1 metrics.go:72] Registering metrics
	I0919 23:14:43.513943       1 controller.go:711] "Syncing nftables rules"
	I0919 23:14:49.616859       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:14:49.616901       1 main.go:301] handling current node
	I0919 23:14:59.613111       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:14:59.613150       1 main.go:301] handling current node
	I0919 23:15:09.616806       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:15:09.616843       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6cb08d2f210eda6eb6b104b96ac64e816b7fab2dd877c455b3d32f16fa032f13] <==
	I0919 23:13:38.294791       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:13:38.301329       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 23:13:38.394148       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:14:30.971448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:14:44.541836       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 23:15:11.493563       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:38046: use of closed network connection
	I0919 23:15:12.320762       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 23:15:12.332564       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:15:12.332659       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 23:15:12.332728       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0919 23:15:12.432751       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.111.57.168"}
	W0919 23:15:12.443016       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:15:12.443084       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 23:15:12.445732       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	W0919 23:15:12.450947       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:15:12.451007       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [9024edac09e016d0476e31f1755919ddf4504e371e70d553d6b26c853be5cb3a] <==
	I0919 23:15:37.130260       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 23:15:37.182127       1 handler_proxy.go:99] no RequestInfo found in the context
	W0919 23:15:37.182168       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:15:37.182206       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 23:15:37.182225       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0919 23:15:37.182251       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:15:37.183393       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:15:37.261961       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.239.169"}
	I0919 23:15:37.741018       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.148.91"}
	I0919 23:15:41.563536       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:15:41.908460       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 23:15:41.958633       1 controller.go:667] quota admission added evaluator for: endpoints
	I0919 23:15:41.958634       1 controller.go:667] quota admission added evaluator for: endpoints
	W0919 23:16:37.183227       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:16:37.183294       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 23:16:37.183321       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 23:16:37.184314       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:16:37.184417       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:16:37.184433       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [665ad2965128a1aa390f367ccbca624d01ee6bee89aa4b03acffe494908e88b8] <==
	I0919 23:15:41.555813       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 23:15:41.556141       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 23:15:41.560856       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 23:15:41.561054       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0919 23:15:41.561266       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0919 23:15:41.561386       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0919 23:15:41.561401       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0919 23:15:41.561409       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0919 23:15:41.565423       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 23:15:41.565868       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:15:41.565884       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 23:15:41.565892       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 23:15:41.570304       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 23:15:41.573437       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 23:15:41.573516       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:15:41.578117       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 23:15:41.578336       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 23:15:41.582467       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0919 23:15:41.582566       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 23:15:41.587207       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 23:15:41.589527       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 23:15:41.590800       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0919 23:15:41.597148       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0919 23:16:11.578829       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:16:11.605838       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [c2e3a7b89e4703676da0d2bd9bc89da04f199a71876c7e42f6ed8afbc9fd9473] <==
	I0919 23:13:37.389745       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 23:13:37.389794       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 23:13:37.390263       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 23:13:37.390285       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 23:13:37.390318       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 23:13:37.390534       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 23:13:37.390632       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0919 23:13:37.390793       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 23:13:37.390811       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 23:13:37.392050       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 23:13:37.393339       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 23:13:37.394470       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0919 23:13:37.394614       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 23:13:37.394622       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0919 23:13:37.394681       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0919 23:13:37.394693       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0919 23:13:37.394700       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0919 23:13:37.404865       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:13:37.407204       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-149888" podCIDRs=["10.244.0.0/24"]
	I0919 23:13:37.410946       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0919 23:13:37.421263       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 23:13:37.431590       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:13:37.439955       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:13:37.439977       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 23:13:37.439984       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [351f4368e8712652bd68f0bd0ebb515c4f49fef1d60d7f5a8189bd9bb301dfa1] <==
	I0919 23:14:20.435528       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:14:20.506456       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:14:20.607146       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:14:20.607208       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0919 23:14:20.607333       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:14:20.637634       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:14:20.637768       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:14:20.645510       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:14:20.646061       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:14:20.646085       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:14:20.647662       1 config.go:200] "Starting service config controller"
	I0919 23:14:20.647686       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:14:20.647708       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:14:20.647722       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:14:20.647738       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:14:20.647743       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:14:20.647764       1 config.go:309] "Starting node config controller"
	I0919 23:14:20.647769       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:14:20.748316       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:14:20.748357       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 23:14:20.748372       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:14:20.748391       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [442a72e42dd57d27df7f19e48129f29be808a95cf0062d2de0da9deebbf13a6b] <==
	I0919 23:15:39.465183       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:15:39.554243       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:15:39.655063       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:15:39.655136       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0919 23:15:39.655412       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:15:39.733481       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:15:39.733552       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:15:39.742350       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:15:39.742787       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:15:39.742822       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:15:39.745983       1 config.go:200] "Starting service config controller"
	I0919 23:15:39.746353       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:15:39.749378       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:15:39.746370       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:15:39.749834       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:15:39.748195       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:15:39.746909       1 config.go:309] "Starting node config controller"
	I0919 23:15:39.757752       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:15:39.757914       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:15:39.850132       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 23:15:39.857795       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 23:15:39.857820       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ad0c48b900b49e90cdbef611d4a6547e0ed3c32d04d88e902443a2aa626145e0] <==
	I0919 23:15:34.612824       1 serving.go:386] Generated self-signed cert in-memory
	I0919 23:15:36.201174       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 23:15:36.201216       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:15:36.208947       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0919 23:15:36.209068       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0919 23:15:36.209219       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:15:36.209250       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:15:36.209281       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:15:36.209290       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:15:36.209457       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 23:15:36.209539       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 23:15:36.309293       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0919 23:15:36.309342       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:15:36.309386       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [bbfb1c954fb1034180e24edeaa8f8df98c52266fc3bff9938f32230a087e7bf7] <==
	E0919 23:13:28.451271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 23:13:29.268227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 23:13:29.277974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 23:13:29.331758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 23:13:29.434023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 23:13:29.509447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 23:13:29.635330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 23:13:29.637354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 23:13:29.652742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 23:13:29.682825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 23:13:29.694381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 23:13:29.696814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 23:13:29.697933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 23:13:29.840967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 23:13:29.890173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 23:13:29.901540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 23:13:29.915344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 23:13:29.938268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 23:13:29.983087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 23:13:29.998143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 23:13:30.956189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 23:13:31.186936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 23:13:31.354332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 23:13:31.602296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I0919 23:13:32.444246       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: E0919 23:16:37.661600    3688 file_linux.go:61] "Unable to read config path" err="unable to create inotify: too many open files" path="/etc/kubernetes/manifests"
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.662789    3688 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.7.27" apiVersion="v1"
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.663418    3688 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.663463    3688 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: E0919 23:16:37.663528    3688 plugins.go:580] "Error initializing dynamic plugin prober" err="error initializing watcher: too many open files"
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.664355    3688 server.go:1262] "Started kubelet"
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.664533    3688 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.664904    3688 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.665025    3688 server_v1.go:49] "podresources" method="list" useActivePods=true
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.665278    3688 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.666074    3688 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.666603    3688 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: E0919 23:16:37.666660    3688 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.671695    3688 server.go:310] "Adding debug handlers to kubelet server"
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.672298    3688 volume_manager.go:313] "Starting Kubelet Volume Manager"
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.672576    3688 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: E0919 23:16:37.672583    3688 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"default-k8s-diff-port-149888\" not found"
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.674075    3688 reconciler.go:29] "Reconciler: start to sync state"
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.682537    3688 factory.go:223] Registration of the systemd container factory successfully
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.682697    3688 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: I0919 23:16:37.685950    3688 factory.go:223] Registration of the containerd container factory successfully
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: E0919 23:16:37.685992    3688 manager.go:294] Registration of the raw container factory failed: inotify_init: too many open files
	Sep 19 23:16:37 default-k8s-diff-port-149888 kubelet[3688]: E0919 23:16:37.686015    3688 kubelet.go:1686] "Failed to start cAdvisor" err="inotify_init: too many open files"
	Sep 19 23:16:37 default-k8s-diff-port-149888 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Sep 19 23:16:37 default-k8s-diff-port-149888 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	
	==> kubernetes-dashboard [75639c6a69c7fb7b2b9402fbd69fec246c57cfd5a262d9cd90c13979bd1c85c0] <==
	2025/09/19 23:15:50 Using namespace: kubernetes-dashboard
	2025/09/19 23:15:50 Using in-cluster config to connect to apiserver
	2025/09/19 23:15:50 Using secret token for csrf signing
	2025/09/19 23:15:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:15:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/19 23:15:50 Successful initial request to the apiserver, version: v1.34.0
	2025/09/19 23:15:50 Generating JWE encryption key
	2025/09/19 23:15:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/19 23:15:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/19 23:15:50 Initializing JWE encryption key from synchronized object
	2025/09/19 23:15:50 Creating in-cluster Sidecar client
	2025/09/19 23:15:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:15:50 Serving insecurely on HTTP port: 9090
	2025/09/19 23:16:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:15:50 Starting overwatch
	
	
	==> storage-provisioner [01f4b9ca69414790581ceaaa1616802fe23fcfbd5472536dee2ae97165537533] <==
	I0919 23:16:22.956609       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 23:16:22.966937       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 23:16:22.966995       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0919 23:16:22.969947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:16:26.425401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:16:31.690007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:16:35.289022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [50bbcbe6da8c015a54149eff64a6f8dfce18bf32136ab051fee00f8082de50cb] <==
	I0919 23:15:39.320795       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 23:16:09.325313       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
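The kubelet section in the log dump above fails repeatedly with "unable to create inotify: too many open files" and "inotify_init: too many open files", which normally points at exhausted inotify limits on the CI host rather than at the cluster itself. A minimal sketch (the sysctl paths are standard Linux knobs; the remediation value in the comment is an assumption about this host, not something the test asserts) for inspecting those limits:

```go
// inotify_limits.go - print the Linux inotify limits whose exhaustion produces
// "too many open files" errors like the kubelet ones in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func readSysctl(path string) string {
	b, err := os.ReadFile(path)
	if err != nil {
		return "unreadable: " + err.Error()
	}
	return strings.TrimSpace(string(b))
}

func main() {
	for _, name := range []string{
		"/proc/sys/fs/inotify/max_user_instances",
		"/proc/sys/fs/inotify/max_user_watches",
	} {
		fmt.Printf("%s = %s\n", name, readSysctl(name))
	}
	// Raising the limit (root required) would look like:
	//   sysctl -w fs.inotify.max_user_instances=1024
	// That is a host-level change and an assumption about the fix, not part of the test.
}
```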
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888: exit status 2 (378.958353ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-149888 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-hskrc
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-149888 describe pod metrics-server-746fcd58dc-hskrc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-149888 describe pod metrics-server-746fcd58dc-hskrc: exit status 1 (101.60353ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-hskrc" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-149888 describe pod metrics-server-746fcd58dc-hskrc: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (9.36s)
E0919 23:18:10.067860   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/no-preload-364197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
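The post-mortem above narrows the failure to a single non-running pod (metrics-server-746fcd58dc-hskrc) before trying to describe it. A minimal sketch of the same field-selector query driven from Go (the kubectl context name is copied from the log; the standalone program itself is hypothetical and is not part of helpers_test.go):

```go
// nonrunning.go - list pods not in phase Running, mirroring the post-mortem step
// "kubectl get po -A --field-selector=status.phase!=Running" shown above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Context name taken from the log above; adjust for another profile.
	ctx := "default-k8s-diff-port-149888"
	out, err := exec.Command("kubectl", "--context", ctx,
		"get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o", "jsonpath={.items[*].metadata.name}").CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("non-running pods: %s\n", out)
}
```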

                                                
                                    

Test pass (290/329)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 13.88
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 12.11
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.21
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.17
21 TestBinaryMirror 0.84
22 TestOffline 73.32
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 155.32
29 TestAddons/serial/Volcano 39.81
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 8.51
35 TestAddons/parallel/Registry 15.69
36 TestAddons/parallel/RegistryCreds 0.71
37 TestAddons/parallel/Ingress 20.57
38 TestAddons/parallel/InspektorGadget 5.3
39 TestAddons/parallel/MetricsServer 5.85
41 TestAddons/parallel/CSI 35.75
42 TestAddons/parallel/Headlamp 20.65
43 TestAddons/parallel/CloudSpanner 5.55
44 TestAddons/parallel/LocalPath 12.21
45 TestAddons/parallel/NvidiaDevicePlugin 6.51
46 TestAddons/parallel/Yakd 11.74
47 TestAddons/parallel/AmdGpuDevicePlugin 5.56
48 TestAddons/StoppedEnableDisable 12.3
49 TestCertOptions 28.74
50 TestCertExpiration 222.16
52 TestForceSystemdFlag 32.08
53 TestForceSystemdEnv 38.03
54 TestDockerEnvContainerd 39.41
55 TestKVMDriverInstallOrUpdate 1.46
59 TestErrorSpam/setup 22.46
60 TestErrorSpam/start 0.65
61 TestErrorSpam/status 0.93
62 TestErrorSpam/pause 1.59
63 TestErrorSpam/unpause 1.62
64 TestErrorSpam/stop 1.43
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 43.44
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.44
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.85
76 TestFunctional/serial/CacheCmd/cache/add_local 2.01
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 40.51
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.54
87 TestFunctional/serial/LogsFileCmd 1.54
88 TestFunctional/serial/InvalidService 4.54
90 TestFunctional/parallel/ConfigCmd 0.39
91 TestFunctional/parallel/DashboardCmd 11.1
92 TestFunctional/parallel/DryRun 0.44
93 TestFunctional/parallel/InternationalLanguage 0.21
94 TestFunctional/parallel/StatusCmd 1.14
98 TestFunctional/parallel/ServiceCmdConnect 18.71
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 41.56
102 TestFunctional/parallel/SSHCmd 0.58
103 TestFunctional/parallel/CpCmd 1.85
104 TestFunctional/parallel/MySQL 22.83
105 TestFunctional/parallel/FileSync 0.41
106 TestFunctional/parallel/CertSync 2.24
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
114 TestFunctional/parallel/License 0.4
115 TestFunctional/parallel/ServiceCmd/DeployApp 8.21
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 17.21
121 TestFunctional/parallel/ServiceCmd/List 0.93
122 TestFunctional/parallel/ServiceCmd/JSONOutput 0.9
123 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
124 TestFunctional/parallel/ServiceCmd/Format 0.34
125 TestFunctional/parallel/ServiceCmd/URL 0.35
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
127 TestFunctional/parallel/ProfileCmd/profile_list 0.39
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
129 TestFunctional/parallel/MountCmd/any-port 12.59
130 TestFunctional/parallel/Version/short 0.06
131 TestFunctional/parallel/Version/components 0.59
132 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
133 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
134 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
135 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
136 TestFunctional/parallel/ImageCommands/ImageBuild 5.12
137 TestFunctional/parallel/ImageCommands/Setup 1.72
138 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.21
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.17
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.89
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.72
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
154 TestFunctional/parallel/MountCmd/specific-port 1.71
155 TestFunctional/parallel/MountCmd/VerifyCleanup 2.05
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 125.1
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.77
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.6
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.79
177 TestMultiControlPlane/serial/StopCluster 24.31
182 TestJSONOutput/start/Command 42.06
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.78
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.64
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 5.75
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.22
207 TestKicCustomNetwork/create_custom_network 34.33
208 TestKicCustomNetwork/use_default_bridge_network 25.65
209 TestKicExistingNetwork 24.73
210 TestKicCustomSubnet 26.04
211 TestKicStaticIP 24.96
212 TestMainNoArgs 0.05
213 TestMinikubeProfile 50.31
216 TestMountStart/serial/StartWithMountFirst 5.58
217 TestMountStart/serial/VerifyMountFirst 0.27
218 TestMountStart/serial/StartWithMountSecond 5.99
219 TestMountStart/serial/VerifyMountSecond 0.27
220 TestMountStart/serial/DeleteFirst 1.68
221 TestMountStart/serial/VerifyMountPostDelete 0.26
222 TestMountStart/serial/Stop 1.29
223 TestMountStart/serial/RestartStopped 7.74
224 TestMountStart/serial/VerifyMountPostStop 0.27
227 TestMultiNode/serial/FreshStart2Nodes 57.96
228 TestMultiNode/serial/DeployApp2Nodes 18.6
229 TestMultiNode/serial/PingHostFrom2Pods 0.79
230 TestMultiNode/serial/AddNode 12.09
231 TestMultiNode/serial/MultiNodeLabels 0.08
232 TestMultiNode/serial/ProfileList 0.69
233 TestMultiNode/serial/CopyFile 9.8
234 TestMultiNode/serial/StopNode 2.2
235 TestMultiNode/serial/StartAfterStop 7.1
236 TestMultiNode/serial/RestartKeepsNodes 70.75
237 TestMultiNode/serial/DeleteNode 5.22
238 TestMultiNode/serial/StopMultiNode 24.1
239 TestMultiNode/serial/RestartMultiNode 45.67
240 TestMultiNode/serial/ValidateNameConflict 23.65
245 TestPreload 133.42
247 TestScheduledStopUnix 98.22
250 TestInsufficientStorage 9.48
251 TestRunningBinaryUpgrade 44.45
253 TestKubernetesUpgrade 340.48
254 TestMissingContainerUpgrade 139.08
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
257 TestNoKubernetes/serial/StartWithK8s 33.31
258 TestNoKubernetes/serial/StartWithStopK8s 28.47
259 TestNoKubernetes/serial/Start 6.59
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
261 TestNoKubernetes/serial/ProfileList 1.85
262 TestNoKubernetes/serial/Stop 1.21
263 TestNoKubernetes/serial/StartNoArgs 7.91
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
265 TestStoppedBinaryUpgrade/Setup 3
266 TestStoppedBinaryUpgrade/Upgrade 59.53
267 TestStoppedBinaryUpgrade/MinikubeLogs 1.26
276 TestPause/serial/Start 45.18
284 TestNetworkPlugins/group/false 3.66
288 TestPause/serial/SecondStartNoReconfiguration 7.85
289 TestPause/serial/Pause 0.87
290 TestPause/serial/VerifyStatus 0.34
291 TestPause/serial/Unpause 0.81
292 TestPause/serial/PauseAgain 0.94
293 TestPause/serial/DeletePaused 2.96
295 TestStartStop/group/old-k8s-version/serial/FirstStart 57.44
296 TestPause/serial/VerifyDeletedResources 0.64
298 TestStartStop/group/no-preload/serial/FirstStart 77.7
300 TestStartStop/group/embed-certs/serial/FirstStart 99.33
301 TestStartStop/group/old-k8s-version/serial/DeployApp 8.3
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.97
303 TestStartStop/group/old-k8s-version/serial/Stop 12.12
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
305 TestStartStop/group/old-k8s-version/serial/SecondStart 43.66
306 TestStartStop/group/no-preload/serial/DeployApp 10.32
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.99
308 TestStartStop/group/no-preload/serial/Stop 12.08
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
310 TestStartStop/group/no-preload/serial/SecondStart 86.68
311 TestStartStop/group/embed-certs/serial/DeployApp 9.31
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
315 TestStartStop/group/embed-certs/serial/Stop 12.13
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
317 TestStartStop/group/old-k8s-version/serial/Pause 2.93
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 133.17
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
321 TestStartStop/group/embed-certs/serial/SecondStart 52.01
323 TestStartStop/group/newest-cni/serial/FirstStart 36.33
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
326 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
328 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
332 TestNetworkPlugins/group/auto/Start 47.31
333 TestStartStop/group/newest-cni/serial/DeployApp 0
334 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.38
335 TestStartStop/group/newest-cni/serial/Stop 2.35
336 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
337 TestStartStop/group/newest-cni/serial/SecondStart 12.41
338 TestNetworkPlugins/group/kindnet/Start 48.64
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
343 TestNetworkPlugins/group/calico/Start 165.65
344 TestNetworkPlugins/group/auto/KubeletFlags 0.31
345 TestNetworkPlugins/group/auto/NetCatPod 9.23
346 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
347 TestNetworkPlugins/group/auto/DNS 0.14
348 TestNetworkPlugins/group/auto/Localhost 0.12
349 TestNetworkPlugins/group/auto/HairPin 0.12
350 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.3
351 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
352 TestNetworkPlugins/group/kindnet/NetCatPod 9.19
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.04
354 TestNetworkPlugins/group/kindnet/DNS 0.16
355 TestNetworkPlugins/group/kindnet/Localhost 0.12
356 TestNetworkPlugins/group/kindnet/HairPin 0.14
357 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.02
358 TestNetworkPlugins/group/custom-flannel/Start 623.3
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.34
361 TestNetworkPlugins/group/enable-default-cni/Start 104.41
362 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
363 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
364 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
366 TestNetworkPlugins/group/flannel/Start 131.18
367 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
368 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.2
369 TestNetworkPlugins/group/calico/ControllerPod 6.01
370 TestNetworkPlugins/group/calico/KubeletFlags 0.3
371 TestNetworkPlugins/group/calico/NetCatPod 9.22
372 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
373 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
374 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
375 TestNetworkPlugins/group/calico/DNS 0.15
376 TestNetworkPlugins/group/calico/Localhost 0.12
377 TestNetworkPlugins/group/calico/HairPin 0.13
378 TestNetworkPlugins/group/bridge/Start 63.36
379 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
380 TestNetworkPlugins/group/bridge/NetCatPod 9.2
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
383 TestNetworkPlugins/group/flannel/NetCatPod 9.21
384 TestNetworkPlugins/group/bridge/DNS 0.16
385 TestNetworkPlugins/group/bridge/Localhost 0.12
386 TestNetworkPlugins/group/bridge/HairPin 0.14
387 TestNetworkPlugins/group/flannel/DNS 0.16
388 TestNetworkPlugins/group/flannel/Localhost 0.12
389 TestNetworkPlugins/group/flannel/HairPin 0.14
390 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
391 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.2
392 TestNetworkPlugins/group/custom-flannel/DNS 0.15
393 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
394 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (13.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-590307 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-590307 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.878830475s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (13.88s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0919 22:14:21.092955   18210 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I0919 22:14:21.093058   18210 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-590307
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-590307: exit status 85 (64.139963ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-590307 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-590307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:14:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:14:07.257483   18223 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:14:07.257732   18223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:07.257756   18223 out.go:374] Setting ErrFile to fd 2...
	I0919 22:14:07.257763   18223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:07.257985   18223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	W0919 22:14:07.258184   18223 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21594-14678/.minikube/config/config.json: open /home/jenkins/minikube-integration/21594-14678/.minikube/config/config.json: no such file or directory
	I0919 22:14:07.258695   18223 out.go:368] Setting JSON to true
	I0919 22:14:07.259682   18223 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3391,"bootTime":1758316656,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:14:07.259787   18223 start.go:140] virtualization: kvm guest
	I0919 22:14:07.262436   18223 out.go:99] [download-only-590307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0919 22:14:07.262604   18223 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 22:14:07.262661   18223 notify.go:220] Checking for updates...
	I0919 22:14:07.264405   18223 out.go:171] MINIKUBE_LOCATION=21594
	I0919 22:14:07.266017   18223 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:14:07.268048   18223 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:14:07.272665   18223 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:14:07.274719   18223 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 22:14:07.277847   18223 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 22:14:07.278149   18223 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:14:07.303046   18223 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:14:07.303178   18223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:07.709305   18223 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-19 22:14:07.699086593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:14:07.709427   18223 docker.go:318] overlay module found
	I0919 22:14:07.711180   18223 out.go:99] Using the docker driver based on user configuration
	I0919 22:14:07.711218   18223 start.go:304] selected driver: docker
	I0919 22:14:07.711226   18223 start.go:918] validating driver "docker" against <nil>
	I0919 22:14:07.711354   18223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:07.770924   18223 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-19 22:14:07.759525492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:14:07.771187   18223 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:14:07.771935   18223 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0919 22:14:07.772178   18223 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 22:14:07.774316   18223 out.go:171] Using Docker driver with root privileges
	I0919 22:14:07.775746   18223 cni.go:84] Creating CNI manager for ""
	I0919 22:14:07.775822   18223 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 22:14:07.775835   18223 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:14:07.775902   18223 start.go:348] cluster config:
	{Name:download-only-590307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-590307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:14:07.777334   18223 out.go:99] Starting "download-only-590307" primary control-plane node in "download-only-590307" cluster
	I0919 22:14:07.777374   18223 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:14:07.778741   18223 out.go:99] Pulling base image v0.0.48 ...
	I0919 22:14:07.778771   18223 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0919 22:14:07.778820   18223 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:14:07.797201   18223 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0919 22:14:07.797399   18223 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0919 22:14:07.797512   18223 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0919 22:14:07.878059   18223 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I0919 22:14:07.878102   18223 cache.go:58] Caching tarball of preloaded images
	I0919 22:14:07.878326   18223 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0919 22:14:07.880520   18223 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0919 22:14:07.880546   18223 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 ...
	I0919 22:14:08.347821   18223 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-590307 host does not exist
	  To start a cluster, run: "minikube start -p download-only-590307"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
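The "Last Start" log above fetches the v1.28.0 preload tarball with an md5 checksum in the download URL's query string. A minimal sketch of computing such a checksum locally (the default ~/.minikube layout is assumed here rather than the Jenkins-specific MINIKUBE_HOME in the log, and whether preload.go verifies the file in exactly this way is an assumption):

```go
// preload_md5.go - compute the md5 of a downloaded preload tarball so it can be
// compared against the checksum query parameter seen in the download URL above.
package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"os"
)

func main() {
	// Default ~/.minikube layout assumed; the CI log uses a Jenkins-specific MINIKUBE_HOME.
	path := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4")
	f, err := os.Open(path)
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		fmt.Println("read:", err)
		return
	}
	fmt.Printf("md5(%s) = %x\n", path, h.Sum(nil))
}
```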

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-590307
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (12.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-403642 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-403642 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.113309587s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (12.11s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0919 22:14:33.627146   18210 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
I0919 22:14:33.627215   18210 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-403642
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-403642: exit status 85 (62.897297ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-590307 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-590307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:14 UTC │
	│ delete  │ -p download-only-590307                                                                                                                                                               │ download-only-590307 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:14 UTC │
	│ start   │ -o=json --download-only -p download-only-403642 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-403642 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:14:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:14:21.553042   18605 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:14:21.553151   18605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:21.553185   18605 out.go:374] Setting ErrFile to fd 2...
	I0919 22:14:21.553191   18605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:21.553399   18605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:14:21.553891   18605 out.go:368] Setting JSON to true
	I0919 22:14:21.554711   18605 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3406,"bootTime":1758316656,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:14:21.554807   18605 start.go:140] virtualization: kvm guest
	I0919 22:14:21.557025   18605 out.go:99] [download-only-403642] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:14:21.557218   18605 notify.go:220] Checking for updates...
	I0919 22:14:21.558711   18605 out.go:171] MINIKUBE_LOCATION=21594
	I0919 22:14:21.560378   18605 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:14:21.561907   18605 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:14:21.563611   18605 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:14:21.565348   18605 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 22:14:21.569843   18605 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 22:14:21.570316   18605 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:14:21.596085   18605 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:14:21.596207   18605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:21.656088   18605 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-19 22:14:21.645805218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:14:21.656232   18605 docker.go:318] overlay module found
	I0919 22:14:21.657805   18605 out.go:99] Using the docker driver based on user configuration
	I0919 22:14:21.657838   18605 start.go:304] selected driver: docker
	I0919 22:14:21.657844   18605 start.go:918] validating driver "docker" against <nil>
	I0919 22:14:21.657925   18605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:21.714193   18605 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-19 22:14:21.704249891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:14:21.714350   18605 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:14:21.714827   18605 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0919 22:14:21.714960   18605 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 22:14:21.717180   18605 out.go:171] Using Docker driver with root privileges
	I0919 22:14:21.718638   18605 cni.go:84] Creating CNI manager for ""
	I0919 22:14:21.718707   18605 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0919 22:14:21.718722   18605 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:14:21.718805   18605 start.go:348] cluster config:
	{Name:download-only-403642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-403642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:14:21.720282   18605 out.go:99] Starting "download-only-403642" primary control-plane node in "download-only-403642" cluster
	I0919 22:14:21.720303   18605 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0919 22:14:21.721614   18605 out.go:99] Pulling base image v0.0.48 ...
	I0919 22:14:21.721636   18605 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:14:21.721722   18605 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:14:21.738519   18605 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0919 22:14:21.738650   18605 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0919 22:14:21.738668   18605 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0919 22:14:21.738676   18605 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0919 22:14:21.738683   18605 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0919 22:14:22.047112   18605 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0919 22:14:22.047145   18605 cache.go:58] Caching tarball of preloaded images
	I0919 22:14:22.047315   18605 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0919 22:14:22.049340   18605 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0919 22:14:22.049360   18605 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 ...
	I0919 22:14:22.147068   18605 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2b7b36e7513c2e517ecf49b6f3ce02cf -> /home/jenkins/minikube-integration/21594-14678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-403642 host does not exist
	  To start a cluster, run: "minikube start -p download-only-403642"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)
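Exit status 85 is the expected outcome here: a --download-only profile never creates a node, so "minikube logs" has nothing to collect. A minimal sketch of reproducing the check by hand with the same profile name:

  out/minikube-linux-amd64 logs -p download-only-403642
  echo $?    # expected: 85, with the "control-plane node ... host does not exist" hint on stdout
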
TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-403642
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.17s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-178947 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-178947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-178947
--- PASS: TestDownloadOnlyKic (1.17s)

TestBinaryMirror (0.84s)

=== RUN   TestBinaryMirror
I0919 22:14:35.478964   18210 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-833272 --alsologtostderr --binary-mirror http://127.0.0.1:40567 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-833272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-833272
--- PASS: TestBinaryMirror (0.84s)

TestOffline (73.32s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-079762 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-079762 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m10.868059172s)
helpers_test.go:175: Cleaning up "offline-containerd-079762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-079762
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-079762: (2.447909413s)
--- PASS: TestOffline (73.32s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-019551
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-019551: exit status 85 (54.229276ms)

-- stdout --
	* Profile "addons-019551" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-019551"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-019551
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-019551: exit status 85 (56.88907ms)

-- stdout --
	* Profile "addons-019551" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-019551"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (155.32s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-019551 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-019551 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m35.318067525s)
--- PASS: TestAddons/Setup (155.32s)

TestAddons/serial/Volcano (39.81s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 15.175598ms
addons_test.go:876: volcano-admission stabilized in 15.256546ms
addons_test.go:884: volcano-controller stabilized in 15.304568ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-t4xpp" [eaa0f1ca-d11b-4bf9-a691-335c65600e6f] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003467087s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-tfpbj" [cc08bea9-e308-41ce-ba70-16ee0958b76f] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003185972s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-6lzkj" [8522a541-73fd-4c1a-a686-5c3a9f722fff] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003499316s
addons_test.go:903: (dbg) Run:  kubectl --context addons-019551 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-019551 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-019551 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [49a7a6d7-21b3-4c8c-8ae7-daeef9580888] Pending
helpers_test.go:352: "test-job-nginx-0" [49a7a6d7-21b3-4c8c-8ae7-daeef9580888] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [49a7a6d7-21b3-4c8c-8ae7-daeef9580888] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003750362s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-019551 addons disable volcano --alsologtostderr -v=1: (11.40925992s)
--- PASS: TestAddons/serial/Volcano (39.81s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-019551 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-019551 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/serial/GCPAuth/FakeCredentials (8.51s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-019551 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-019551 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6cb3bfea-078d-46fe-8e2f-8a61c550bcf4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6cb3bfea-078d-46fe-8e2f-8a61c550bcf4] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003625207s
addons_test.go:694: (dbg) Run:  kubectl --context addons-019551 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-019551 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-019551 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.51s)

TestAddons/parallel/Registry (15.69s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.72125ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-5tgqx" [53f0452a-d053-45fe-9733-c43ff0601b07] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003469747s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-v5fdh" [adae146c-b0ab-490e-963a-eeb5a73e7c20] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002760637s
addons_test.go:392: (dbg) Run:  kubectl --context addons-019551 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-019551 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-019551 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.834669902s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 ip
2025/09/19 22:18:24 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.69s)
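The DEBUG line above reaches the registry through registry-proxy on the node IP. A rough host-side equivalent, assuming curl is available (the test itself uses a Go HTTP client plus the in-cluster busybox wget shown earlier):

  NODE_IP=$(out/minikube-linux-amd64 -p addons-019551 ip)
  curl -sSI "http://${NODE_IP}:5000"    # same endpoint as the GET http://192.168.49.2:5000 above
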
TestAddons/parallel/RegistryCreds (0.71s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.653932ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-019551
addons_test.go:332: (dbg) Run:  kubectl --context addons-019551 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.71s)

TestAddons/parallel/Ingress (20.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-019551 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-019551 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-019551 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [94ee0c78-c3b0-4447-9ba6-ef015c460be0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [94ee0c78-c3b0-4447-9ba6-ef015c460be0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004151646s
I0919 22:18:19.665046   18210 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-019551 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-019551 addons disable ingress-dns --alsologtostderr -v=1: (1.479690926s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-019551 addons disable ingress --alsologtostderr -v=1: (7.75615925s)
--- PASS: TestAddons/parallel/Ingress (20.57s)

TestAddons/parallel/InspektorGadget (5.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-zhjfn" [2daa78f8-f672-455a-8eff-c5d2fa3b95b4] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004698721s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.30s)

TestAddons/parallel/MetricsServer (5.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.658611ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-5p22z" [4f30bad2-ccfa-484d-8c5c-50535feb1ff1] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003521837s
addons_test.go:463: (dbg) Run:  kubectl --context addons-019551 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.85s)

TestAddons/parallel/CSI (35.75s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0919 22:18:33.034318   18210 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0919 22:18:33.037616   18210 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0919 22:18:33.037648   18210 kapi.go:107] duration metric: took 3.3343ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.348004ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-019551 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-019551 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [f1e44903-b0c6-4412-94e6-83f0b0f95787] Pending
helpers_test.go:352: "task-pv-pod" [f1e44903-b0c6-4412-94e6-83f0b0f95787] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [f1e44903-b0c6-4412-94e6-83f0b0f95787] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003219376s
addons_test.go:572: (dbg) Run:  kubectl --context addons-019551 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-019551 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-019551 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-019551 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-019551 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-019551 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-019551 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [3abd26c1-f8ca-4d97-ad92-c25f42bd9374] Pending
helpers_test.go:352: "task-pv-pod-restore" [3abd26c1-f8ca-4d97-ad92-c25f42bd9374] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [3abd26c1-f8ca-4d97-ad92-c25f42bd9374] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004069428s
addons_test.go:614: (dbg) Run:  kubectl --context addons-019551 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-019551 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-019551 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-019551 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.614778111s)
--- PASS: TestAddons/parallel/CSI (35.75s)

TestAddons/parallel/Headlamp (20.65s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-019551 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-qmgzs" [580e9f9e-5f5f-4d1e-801c-1222fad9f7a7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-qmgzs" [580e9f9e-5f5f-4d1e-801c-1222fad9f7a7] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.002973983s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-019551 addons disable headlamp --alsologtostderr -v=1: (5.758695898s)
--- PASS: TestAddons/parallel/Headlamp (20.65s)

TestAddons/parallel/CloudSpanner (5.55s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-tkddb" [77ba530d-e5ec-4fc2-b4ec-10ded07b3e7e] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004062183s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

TestAddons/parallel/LocalPath (12.21s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-019551 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-019551 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-019551 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [15e1ac80-1d97-4896-9cba-a396d603a832] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [15e1ac80-1d97-4896-9cba-a396d603a832] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [15e1ac80-1d97-4896-9cba-a396d603a832] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003766456s
addons_test.go:967: (dbg) Run:  kubectl --context addons-019551 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 ssh "cat /opt/local-path-provisioner/pvc-e1174f38-c659-40f1-885b-7d850199ff9d_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-019551 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-019551 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.21s)

TestAddons/parallel/NvidiaDevicePlugin (6.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-j24v4" [43d36b11-c448-418f-a41a-6ca8c1681d03] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003722121s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

TestAddons/parallel/Yakd (11.74s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-4bprt" [436d9e71-e22b-4fe4-a61d-57873810ae10] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003375583s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-019551 addons disable yakd --alsologtostderr -v=1: (5.740305453s)
--- PASS: TestAddons/parallel/Yakd (11.74s)

TestAddons/parallel/AmdGpuDevicePlugin (5.56s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-5xcc2" [33ca08a7-5a42-4b66-856a-af4abc6e1bcd] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003818432s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-019551 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.56s)

TestAddons/StoppedEnableDisable (12.3s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-019551
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-019551: (12.038095506s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-019551
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-019551
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-019551
--- PASS: TestAddons/StoppedEnableDisable (12.30s)

TestCertOptions (28.74s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-757919 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-757919 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (25.276174049s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-757919 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-757919 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-757919 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-757919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-757919
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-757919: (2.726366389s)
--- PASS: TestCertOptions (28.74s)

TestCertExpiration (222.16s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-175441 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-175441 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (33.348355677s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-175441 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-175441 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.163257739s)
helpers_test.go:175: Cleaning up "cert-expiration-175441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-175441
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-175441: (2.647182236s)
--- PASS: TestCertExpiration (222.16s)

TestForceSystemdFlag (32.08s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-127407 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-127407 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (29.008895179s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-127407 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-127407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-127407
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-127407: (2.74881929s)
--- PASS: TestForceSystemdFlag (32.08s)

TestForceSystemdEnv (38.03s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-138882 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-138882 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.16485541s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-138882 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-138882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-138882
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-138882: (2.557665234s)
--- PASS: TestForceSystemdEnv (38.03s)

                                                
                                    
TestDockerEnvContainerd (39.41s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-600724 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-600724 --driver=docker  --container-runtime=containerd: (22.372881271s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-600724"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-600724": (1.012492032s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXAoliB8/agent.44031" SSH_AGENT_PID="44032" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXAoliB8/agent.44031" SSH_AGENT_PID="44032" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXAoliB8/agent.44031" SSH_AGENT_PID="44032" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (2.609330837s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXAoliB8/agent.44031" SSH_AGENT_PID="44032" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-600724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-600724
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-600724: (2.401687017s)
--- PASS: TestDockerEnvContainerd (39.41s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.46s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.46s)

                                                
                                    
TestErrorSpam/setup (22.46s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-786654 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-786654 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-786654 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-786654 --driver=docker  --container-runtime=containerd: (22.462868322s)
--- PASS: TestErrorSpam/setup (22.46s)

                                                
                                    
TestErrorSpam/start (0.65s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 start --dry-run
--- PASS: TestErrorSpam/start (0.65s)

                                                
                                    
TestErrorSpam/status (0.93s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 status
--- PASS: TestErrorSpam/status (0.93s)

                                                
                                    
TestErrorSpam/pause (1.59s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 pause
--- PASS: TestErrorSpam/pause (1.59s)

                                                
                                    
TestErrorSpam/unpause (1.62s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 unpause
--- PASS: TestErrorSpam/unpause (1.62s)

                                                
                                    
TestErrorSpam/stop (1.43s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 stop: (1.235822741s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786654 --log_dir /tmp/nospam-786654 stop
--- PASS: TestErrorSpam/stop (1.43s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21594-14678/.minikube/files/etc/test/nested/copy/18210/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (43.44s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-541880 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-541880 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (43.436785932s)
--- PASS: TestFunctional/serial/StartWithProxy (43.44s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.44s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0919 22:21:22.829335   18210 config.go:182] Loaded profile config "functional-541880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-541880 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-541880 --alsologtostderr -v=8: (6.436999774s)
functional_test.go:678: soft start took 6.43800642s for "functional-541880" cluster.
I0919 22:21:29.267089   18210 config.go:182] Loaded profile config "functional-541880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (6.44s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-541880 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.85s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-541880 cache add registry.k8s.io/pause:3.3: (1.065096251s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.85s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-541880 /tmp/TestFunctionalserialCacheCmdcacheadd_local3301257974/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 cache add minikube-local-cache-test:functional-541880
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-541880 cache add minikube-local-cache-test:functional-541880: (1.619721902s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 cache delete minikube-local-cache-test:functional-541880
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-541880
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.01s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-541880 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (292.945931ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 kubectl -- --context functional-541880 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-541880 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.51s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-541880 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0919 22:22:11.708610   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:11.715130   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:11.726593   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:11.748105   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:11.789512   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:11.871058   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:12.032647   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:12.354503   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:12.996613   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:14.278298   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:16.841294   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-541880 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.513998366s)
functional_test.go:776: restart took 40.514135371s for "functional-541880" cluster.
I0919 22:22:17.177785   18210 config.go:182] Loaded profile config "functional-541880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (40.51s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-541880 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-541880 logs: (1.540678727s)
--- PASS: TestFunctional/serial/LogsCmd (1.54s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 logs --file /tmp/TestFunctionalserialLogsFileCmd4221865179/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-541880 logs --file /tmp/TestFunctionalserialLogsFileCmd4221865179/001/logs.txt: (1.536629447s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                    
TestFunctional/serial/InvalidService (4.54s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-541880 apply -f testdata/invalidsvc.yaml
E0919 22:22:21.962717   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-541880
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-541880: exit status 115 (352.260645ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31147 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-541880 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.54s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-541880 config get cpus: exit status 14 (76.150236ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-541880 config get cpus: exit status 14 (58.88453ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-541880 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-541880 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 60625: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.10s)

                                                
                                    
TestFunctional/parallel/DryRun (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-541880 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-541880 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (192.549763ms)

                                                
                                                
-- stdout --
	* [functional-541880] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:22:25.529448   59575 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:22:25.529739   59575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:22:25.529750   59575 out.go:374] Setting ErrFile to fd 2...
	I0919 22:22:25.529754   59575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:22:25.529982   59575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:22:25.530452   59575 out.go:368] Setting JSON to false
	I0919 22:22:25.531737   59575 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3890,"bootTime":1758316656,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:22:25.531866   59575 start.go:140] virtualization: kvm guest
	I0919 22:22:25.534082   59575 out.go:179] * [functional-541880] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:22:25.537114   59575 notify.go:220] Checking for updates...
	I0919 22:22:25.537146   59575 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:22:25.538847   59575 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:22:25.540603   59575 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:22:25.541938   59575 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:22:25.543916   59575 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:22:25.545414   59575 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:22:25.547283   59575 config.go:182] Loaded profile config "functional-541880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:22:25.548008   59575 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:22:25.581473   59575 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:22:25.581613   59575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:22:25.657484   59575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-19 22:22:25.640547798 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:22:25.657641   59575 docker.go:318] overlay module found
	I0919 22:22:25.661434   59575 out.go:179] * Using the docker driver based on existing profile
	I0919 22:22:25.663574   59575 start.go:304] selected driver: docker
	I0919 22:22:25.663603   59575 start.go:918] validating driver "docker" against &{Name:functional-541880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-541880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:22:25.663743   59575 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:22:25.666328   59575 out.go:203] 
	W0919 22:22:25.668025   59575 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 22:22:25.669370   59575 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-541880 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.44s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-541880 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-541880 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (214.420138ms)

                                                
                                                
-- stdout --
	* [functional-541880] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:22:25.332048   59315 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:22:25.332369   59315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:22:25.332383   59315 out.go:374] Setting ErrFile to fd 2...
	I0919 22:22:25.332389   59315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:22:25.332842   59315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:22:25.333353   59315 out.go:368] Setting JSON to false
	I0919 22:22:25.335184   59315 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3889,"bootTime":1758316656,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:22:25.335352   59315 start.go:140] virtualization: kvm guest
	I0919 22:22:25.338926   59315 out.go:179] * [functional-541880] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0919 22:22:25.340712   59315 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:22:25.340847   59315 notify.go:220] Checking for updates...
	I0919 22:22:25.344561   59315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:22:25.348392   59315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 22:22:25.351569   59315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 22:22:25.353540   59315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:22:25.356482   59315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:22:25.358862   59315 config.go:182] Loaded profile config "functional-541880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:22:25.359601   59315 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:22:25.396233   59315 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:22:25.396322   59315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:22:25.465782   59315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-19 22:22:25.454719743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:22:25.465885   59315 docker.go:318] overlay module found
	I0919 22:22:25.470264   59315 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0919 22:22:25.472302   59315 start.go:304] selected driver: docker
	I0919 22:22:25.472326   59315 start.go:918] validating driver "docker" against &{Name:functional-541880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-541880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:22:25.472421   59315 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:22:25.474586   59315 out.go:203] 
	W0919 22:22:25.476069   59315 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 22:22:25.477699   59315 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (18.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-541880 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-541880 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-vv2gc" [0b095571-236a-4e65-bb00-0bc338f79e5d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-vv2gc" [0b095571-236a-4e65-bb00-0bc338f79e5d] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 18.003895979s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32274
functional_test.go:1680: http://192.168.49.2:32274: success! body:
Request served by hello-node-connect-7d85dfc575-vv2gc

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32274
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (18.71s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (41.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [633a56cf-667d-437b-acd2-778ef528be88] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003806882s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-541880 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-541880 apply -f testdata/storage-provisioner/pvc.yaml
E0919 22:22:32.204116   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-541880 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-541880 apply -f testdata/storage-provisioner/pod.yaml
I0919 22:22:32.472450   18210 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [dca7bd7c-2c35-451f-a63e-ff7acfe940b8] Pending
helpers_test.go:352: "sp-pod" [dca7bd7c-2c35-451f-a63e-ff7acfe940b8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [dca7bd7c-2c35-451f-a63e-ff7acfe940b8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.003678192s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-541880 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-541880 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-541880 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [1ebcda05-0885-4c6c-857d-b46e379f1cee] Pending
helpers_test.go:352: "sp-pod" [1ebcda05-0885-4c6c-857d-b46e379f1cee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0919 22:22:52.686262   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [1ebcda05-0885-4c6c-857d-b46e379f1cee] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004006122s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-541880 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.56s)

TestFunctional/parallel/SSHCmd (0.58s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

TestFunctional/parallel/CpCmd (1.85s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh -n functional-541880 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 cp functional-541880:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2663062343/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh -n functional-541880 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh -n functional-541880 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.85s)

TestFunctional/parallel/MySQL (22.83s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-541880 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-2r6cb" [baae533d-afc6-484b-9941-97f10d7f0274] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-2r6cb" [baae533d-afc6-484b-9941-97f10d7f0274] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003791293s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-541880 exec mysql-5bb876957f-2r6cb -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-541880 exec mysql-5bb876957f-2r6cb -- mysql -ppassword -e "show databases;": exit status 1 (136.11379ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0919 22:23:04.332935   18210 retry.go:31] will retry after 929.972241ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-541880 exec mysql-5bb876957f-2r6cb -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-541880 exec mysql-5bb876957f-2r6cb -- mysql -ppassword -e "show databases;": exit status 1 (118.27479ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0919 22:23:05.381587   18210 retry.go:31] will retry after 1.746233679s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-541880 exec mysql-5bb876957f-2r6cb -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-541880 exec mysql-5bb876957f-2r6cb -- mysql -ppassword -e "show databases;": exit status 1 (110.528054ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0919 22:23:07.238821   18210 retry.go:31] will retry after 2.478360177s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-541880 exec mysql-5bb876957f-2r6cb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.83s)

TestFunctional/parallel/FileSync (0.41s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/18210/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "sudo cat /etc/test/nested/copy/18210/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

TestFunctional/parallel/CertSync (2.24s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/18210.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "sudo cat /etc/ssl/certs/18210.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/18210.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "sudo cat /usr/share/ca-certificates/18210.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/182102.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "sudo cat /etc/ssl/certs/182102.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/182102.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "sudo cat /usr/share/ca-certificates/182102.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.24s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-541880 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-541880 ssh "sudo systemctl is-active docker": exit status 1 (323.203742ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-541880 ssh "sudo systemctl is-active crio": exit status 1 (309.67998ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

TestFunctional/parallel/License (0.4s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.40s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-541880 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-541880 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-zgpcp" [39215590-db03-49e8-820b-f128ebb491aa] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-zgpcp" [39215590-db03-49e8-820b-f128ebb491aa] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003404899s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-541880 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-541880 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-541880 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 61053: os: process already finished
helpers_test.go:519: unable to terminate pid 60734: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-541880 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-541880 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-541880 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [0afb7153-c48f-497a-bb5a-6dad436dc6f0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [0afb7153-c48f-497a-bb5a-6dad436dc6f0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 17.003540548s
I0919 22:22:44.955296   18210 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.21s)

TestFunctional/parallel/ServiceCmd/List (0.93s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.93s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.9s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 service list -o json
functional_test.go:1504: Took "895.19978ms" to run "out/minikube-linux-amd64 -p functional-541880 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.90s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30590
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30590
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "339.985327ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "51.622013ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
2025/09/19 22:22:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1381: Took "370.420265ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "67.824269ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctional/parallel/MountCmd/any-port (12.59s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-541880 /tmp/TestFunctionalparallelMountCmdany-port3849191302/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1758320557024339447" to /tmp/TestFunctionalparallelMountCmdany-port3849191302/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1758320557024339447" to /tmp/TestFunctionalparallelMountCmdany-port3849191302/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1758320557024339447" to /tmp/TestFunctionalparallelMountCmdany-port3849191302/001/test-1758320557024339447
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-541880 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (294.560102ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0919 22:22:37.319304   18210 retry.go:31] will retry after 303.861858ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 19 22:22 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 19 22:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 19 22:22 test-1758320557024339447
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh cat /mount-9p/test-1758320557024339447
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-541880 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [96afcbb2-6029-4a11-88a0-3266f5554978] Pending
helpers_test.go:352: "busybox-mount" [96afcbb2-6029-4a11-88a0-3266f5554978] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [96afcbb2-6029-4a11-88a0-3266f5554978] Running
helpers_test.go:352: "busybox-mount" [96afcbb2-6029-4a11-88a0-3266f5554978] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [96afcbb2-6029-4a11-88a0-3266f5554978] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.003499018s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-541880 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-541880 /tmp/TestFunctionalparallelMountCmdany-port3849191302/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.59s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.59s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.59s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-541880 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-541880
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-541880
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-541880 image ls --format short --alsologtostderr:
I0919 22:22:54.074632   67368 out.go:360] Setting OutFile to fd 1 ...
I0919 22:22:54.074769   67368 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:22:54.074784   67368 out.go:374] Setting ErrFile to fd 2...
I0919 22:22:54.074791   67368 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:22:54.075120   67368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
I0919 22:22:54.076030   67368 config.go:182] Loaded profile config "functional-541880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0919 22:22:54.076189   67368 config.go:182] Loaded profile config "functional-541880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0919 22:22:54.076825   67368 cli_runner.go:164] Run: docker container inspect functional-541880 --format={{.State.Status}}
I0919 22:22:54.097122   67368 ssh_runner.go:195] Run: systemctl --version
I0919 22:22:54.097195   67368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-541880
I0919 22:22:54.119409   67368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/functional-541880/id_rsa Username:docker}
I0919 22:22:54.221477   67368 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-541880 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy                  │ v1.34.0            │ sha256:df0860 │ 26MB   │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0            │ sha256:a0af72 │ 22.8MB │
│ docker.io/library/nginx                     │ latest             │ sha256:41f689 │ 72.3MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.0            │ sha256:90550c │ 27.1MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0            │ sha256:46169d │ 17.4MB │
│ docker.io/kicbase/echo-server               │ functional-541880  │ sha256:9056ab │ 2.37MB │
│ docker.io/library/minikube-local-cache-test │ functional-541880  │ sha256:d16882 │ 992B   │
│ docker.io/library/nginx                     │ alpine             │ sha256:4a8601 │ 22.5MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-541880 image ls --format table --alsologtostderr:
I0919 22:22:54.581080   67472 out.go:360] Setting OutFile to fd 1 ...
I0919 22:22:54.581409   67472 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:22:54.581422   67472 out.go:374] Setting ErrFile to fd 2...
I0919 22:22:54.581426   67472 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:22:54.581652   67472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
I0919 22:22:54.582276   67472 config.go:182] Loaded profile config "functional-541880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0919 22:22:54.582367   67472 config.go:182] Loaded profile config "functional-541880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0919 22:22:54.582785   67472 cli_runner.go:164] Run: docker container inspect functional-541880 --format={{.State.Status}}
I0919 22:22:54.605492   67472 ssh_runner.go:195] Run: systemctl --version
I0919 22:22:54.605569   67472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-541880
I0919 22:22:54.626550   67472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/functional-541880/id_rsa Username:docker}
I0919 22:22:54.725206   67472 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-541880 image ls --format json --alsologtostderr:
[{"id":"sha256:d16882a2967e409fd5f6b2cc15bd118dac54a7e1cc27af98affeadbbabfe80da","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-541880"],"size":"992"},{"id":"sha256:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22477192"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:0184c1613d92931126feb4c548e5
da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"22819719"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"25963701"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoT
ags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-541880"],"size":"2372971"},{"id":"sha256:41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81","repoDigests":["docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e"],"repoTags":["docker.io/library/nginx:latest"],"size":"72319182"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provision
er@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"27066504"},{"id":"sha256:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"17385558"},{"id":"sha256:409467f978b4a30fe717012736557d637
f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-541880 image ls --format json --alsologtostderr:
I0919 22:22:54.330486   67424 out.go:360] Setting OutFile to fd 1 ...
I0919 22:22:54.330809   67424 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:22:54.330820   67424 out.go:374] Setting ErrFile to fd 2...
I0919 22:22:54.330827   67424 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:22:54.331095   67424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
I0919 22:22:54.331912   67424 config.go:182] Loaded profile config "functional-541880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0919 22:22:54.332059   67424 config.go:182] Loaded profile config "functional-541880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0919 22:22:54.332537   67424 cli_runner.go:164] Run: docker container inspect functional-541880 --format={{.State.Status}}
I0919 22:22:54.356274   67424 ssh_runner.go:195] Run: systemctl --version
I0919 22:22:54.356352   67424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-541880
I0919 22:22:54.378596   67424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/functional-541880/id_rsa Username:docker}
I0919 22:22:54.476895   67424 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-541880 image ls --format yaml --alsologtostderr:
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "25963701"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:d16882a2967e409fd5f6b2cc15bd118dac54a7e1cc27af98affeadbbabfe80da
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-541880
size: "992"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "27066504"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-541880
size: "2372971"
- id: sha256:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
repoTags:
- docker.io/library/nginx:alpine
size: "22477192"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81
repoDigests:
- docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e
repoTags:
- docker.io/library/nginx:latest
size: "72319182"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "22819719"
- id: sha256:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "17385558"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-541880 image ls --format yaml --alsologtostderr:
I0919 22:22:54.832725   67521 out.go:360] Setting OutFile to fd 1 ...
I0919 22:22:54.833019   67521 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:22:54.833029   67521 out.go:374] Setting ErrFile to fd 2...
I0919 22:22:54.833033   67521 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:22:54.833356   67521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
I0919 22:22:54.834140   67521 config.go:182] Loaded profile config "functional-541880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0919 22:22:54.834299   67521 config.go:182] Loaded profile config "functional-541880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0919 22:22:54.834868   67521 cli_runner.go:164] Run: docker container inspect functional-541880 --format={{.State.Status}}
I0919 22:22:54.856133   67521 ssh_runner.go:195] Run: systemctl --version
I0919 22:22:54.856200   67521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-541880
I0919 22:22:54.878552   67521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/functional-541880/id_rsa Username:docker}
I0919 22:22:54.975730   67521 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-541880 ssh pgrep buildkitd: exit status 1 (288.125928ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image build -t localhost/my-image:functional-541880 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-541880 image build -t localhost/my-image:functional-541880 testdata/build --alsologtostderr: (4.598656184s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-541880 image build -t localhost/my-image:functional-541880 testdata/build --alsologtostderr:
I0919 22:22:55.371542   67669 out.go:360] Setting OutFile to fd 1 ...
I0919 22:22:55.371692   67669 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:22:55.371702   67669 out.go:374] Setting ErrFile to fd 2...
I0919 22:22:55.371707   67669 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:22:55.371957   67669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
I0919 22:22:55.372590   67669 config.go:182] Loaded profile config "functional-541880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0919 22:22:55.373330   67669 config.go:182] Loaded profile config "functional-541880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0919 22:22:55.373768   67669 cli_runner.go:164] Run: docker container inspect functional-541880 --format={{.State.Status}}
I0919 22:22:55.393842   67669 ssh_runner.go:195] Run: systemctl --version
I0919 22:22:55.393905   67669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-541880
I0919 22:22:55.415182   67669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/functional-541880/id_rsa Username:docker}
I0919 22:22:55.512475   67669 build_images.go:161] Building image from path: /tmp/build.496726905.tar
I0919 22:22:55.512566   67669 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0919 22:22:55.525194   67669 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.496726905.tar
I0919 22:22:55.529465   67669 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.496726905.tar: stat -c "%s %y" /var/lib/minikube/build/build.496726905.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.496726905.tar': No such file or directory
I0919 22:22:55.529518   67669 ssh_runner.go:362] scp /tmp/build.496726905.tar --> /var/lib/minikube/build/build.496726905.tar (3072 bytes)
I0919 22:22:55.561022   67669 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.496726905
I0919 22:22:55.573177   67669 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.496726905 -xf /var/lib/minikube/build/build.496726905.tar
I0919 22:22:55.585321   67669 containerd.go:394] Building image: /var/lib/minikube/build/build.496726905
I0919 22:22:55.585393   67669 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.496726905 --local dockerfile=/var/lib/minikube/build/build.496726905 --output type=image,name=localhost/my-image:functional-541880
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 1.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:556778f3f5f0179469d289a7e380aaf61f3df1eb5e57851da39095e6bbf0ef6f done
#8 exporting config sha256:da4fea3f3f5c04ec7032e54749ed1b294d56ea84f33cf2ea64e3127bf4c3de32 0.0s done
#8 naming to localhost/my-image:functional-541880 done
#8 DONE 0.1s
I0919 22:22:59.890166   67669 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.496726905 --local dockerfile=/var/lib/minikube/build/build.496726905 --output type=image,name=localhost/my-image:functional-541880: (4.304722874s)
I0919 22:22:59.890251   67669 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.496726905
I0919 22:22:59.901382   67669 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.496726905.tar
I0919 22:22:59.911939   67669 build_images.go:217] Built localhost/my-image:functional-541880 from /tmp/build.496726905.tar
I0919 22:22:59.911975   67669 build_images.go:133] succeeded building to: functional-541880
I0919 22:22:59.911981   67669 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.12s)

TestFunctional/parallel/ImageCommands/Setup (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.6999811s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-541880
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image load --daemon kicbase/echo-server:functional-541880 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image load --daemon kicbase/echo-server:functional-541880 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-541880
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image load --daemon kicbase/echo-server:functional-541880 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image save kicbase/echo-server:functional-541880 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image rm kicbase/echo-server:functional-541880 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-541880 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.211.195 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-541880 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)
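Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile exercise a tarball round trip. A minimal sketch of that flow, using an illustrative /tmp path in place of the workspace path above:

out/minikube-linux-amd64 -p functional-541880 image save kicbase/echo-server:functional-541880 /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-541880 image rm kicbase/echo-server:functional-541880   # drop it from the node
out/minikube-linux-amd64 -p functional-541880 image load /tmp/echo-server-save.tar             # restore it from the tarball
out/minikube-linux-amd64 -p functional-541880 image ls                                         # the tag should be back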

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-541880
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 image save --daemon kicbase/echo-server:functional-541880 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-541880
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.72s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-541880 /tmp/TestFunctionalparallelMountCmdspecific-port3464559708/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-541880 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (274.31036ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 22:22:49.886593   18210 retry.go:31] will retry after 357.69884ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-541880 /tmp/TestFunctionalparallelMountCmdspecific-port3464559708/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-541880 ssh "sudo umount -f /mount-9p": exit status 1 (283.023055ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-541880 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-541880 /tmp/TestFunctionalparallelMountCmdspecific-port3464559708/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.71s)
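The specific-port test above boils down to: start a 9p mount on a fixed port, confirm it from inside the node, then tear it down. A rough sketch under those assumptions, with an illustrative /tmp/mount-src source directory (the first findmnt may need a retry right after mounting, as seen above):

out/minikube-linux-amd64 mount -p functional-541880 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 --port 46464 &
MOUNT_PID=$!
sleep 2   # give the 9p server a moment; the test retries findmnt instead of sleeping
out/minikube-linux-amd64 -p functional-541880 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-541880 ssh -- ls -la /mount-9p
kill "$MOUNT_PID"   # illustrative teardown; the test stops its own mount process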

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-541880 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2300235359/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-541880 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2300235359/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-541880 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2300235359/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "findmnt -T" /mount1
I0919 22:22:51.451485   18210 detect.go:223] nested VM detected
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-541880 ssh "findmnt -T" /mount1: exit status 1 (400.330225ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 22:22:51.726857   18210 retry.go:31] will retry after 702.02749ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-541880 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-541880 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-541880 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2300235359/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-541880 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2300235359/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-541880 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2300235359/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.05s)
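When the mount helper processes are already gone (as with the "assuming dead" parents above), the one-shot cleanup the test runs is enough to make sure nothing is left behind; a short sketch of that check:

out/minikube-linux-amd64 mount -p functional-541880 --kill=true                              # kill any remaining mount processes for this profile
out/minikube-linux-amd64 -p functional-541880 ssh "findmnt -T /mount1 || true"               # illustrative check that the mount point is gone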

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-541880
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-541880
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-541880
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (125.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0919 22:23:33.648343   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:24:55.570066   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m4.365166009s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (125.10s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-326307 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.60s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (24.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 stop --alsologtostderr -v 5
E0919 22:47:11.700306   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-326307 stop --alsologtostderr -v 5: (24.199988976s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5: exit status 7 (110.960696ms)

                                                
                                                
-- stdout --
	ha-326307
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-326307-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-326307-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:47:22.283879  117291 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:47:22.284001  117291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:47:22.284006  117291 out.go:374] Setting ErrFile to fd 2...
	I0919 22:47:22.284010  117291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:47:22.284223  117291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:47:22.284398  117291 out.go:368] Setting JSON to false
	I0919 22:47:22.284417  117291 mustload.go:65] Loading cluster: ha-326307
	I0919 22:47:22.284570  117291 notify.go:220] Checking for updates...
	I0919 22:47:22.284902  117291 config.go:182] Loaded profile config "ha-326307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:47:22.284930  117291 status.go:174] checking status of ha-326307 ...
	I0919 22:47:22.285446  117291 cli_runner.go:164] Run: docker container inspect ha-326307 --format={{.State.Status}}
	I0919 22:47:22.305945  117291 status.go:371] ha-326307 host status = "Stopped" (err=<nil>)
	I0919 22:47:22.305976  117291 status.go:384] host is not running, skipping remaining checks
	I0919 22:47:22.305984  117291 status.go:176] ha-326307 status: &{Name:ha-326307 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:47:22.306014  117291 status.go:174] checking status of ha-326307-m02 ...
	I0919 22:47:22.306340  117291 cli_runner.go:164] Run: docker container inspect ha-326307-m02 --format={{.State.Status}}
	I0919 22:47:22.325882  117291 status.go:371] ha-326307-m02 host status = "Stopped" (err=<nil>)
	I0919 22:47:22.325912  117291 status.go:384] host is not running, skipping remaining checks
	I0919 22:47:22.325921  117291 status.go:176] ha-326307-m02 status: &{Name:ha-326307-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:47:22.325947  117291 status.go:174] checking status of ha-326307-m04 ...
	I0919 22:47:22.326285  117291 cli_runner.go:164] Run: docker container inspect ha-326307-m04 --format={{.State.Status}}
	I0919 22:47:22.345692  117291 status.go:371] ha-326307-m04 host status = "Stopped" (err=<nil>)
	I0919 22:47:22.345723  117291 status.go:384] host is not running, skipping remaining checks
	I0919 22:47:22.345732  117291 status.go:176] ha-326307-m04 status: &{Name:ha-326307-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.31s)
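The exit status 7 above is what the test asserts on: with every node stopped, `status` reports Stopped for each component and returns a non-zero code. Re-running the same check by hand should look like:

out/minikube-linux-amd64 -p ha-326307 status --alsologtostderr -v 5
echo "status exit code: $?"   # 7 while the cluster is stopped, per the run above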

                                                
                                    
TestJSONOutput/start/Command (42.06s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-226247 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-226247 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (42.056287557s)
--- PASS: TestJSONOutput/start/Command (42.06s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-226247 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-226247 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.75s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-226247 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-226247 --output=json --user=testUser: (5.753227967s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-525605 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-525605 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (70.213367ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8aceb060-f287-4e99-b62c-916fc93070c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-525605] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"79db2028-e77c-487d-8292-bb5cf16beb50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21594"}}
	{"specversion":"1.0","id":"b16e5fd5-2a81-44ec-af98-b1140f803c1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"92c6ccc0-758a-4af8-ad5f-cf6f1dd474e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig"}}
	{"specversion":"1.0","id":"a68d5488-8beb-4b1f-b1f0-7ad862ee7c15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube"}}
	{"specversion":"1.0","id":"5e895fb1-4d1d-4f04-a2ec-29ff15fbfcfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7ca283b0-ed18-4575-bb60-4b2cf216598a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cc23b0fd-da6e-43d6-8f83-bbe3aacd6b74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-525605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-525605
--- PASS: TestErrorJSONOutput (0.22s)
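Each line of the --output=json stream above is a CloudEvents object with a type and a data payload. As an illustration only (jq is not part of the test), the error event can be pulled out of the stream using the field names visible in the output above:

out/minikube-linux-amd64 start -p json-output-error-525605 --memory=3072 --output=json --wait=true --driver=fail \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
# expected: The driver 'fail' is not supported on linux/amd64
out/minikube-linux-amd64 delete -p json-output-error-525605   # clean up, as the test does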

                                                
                                    
TestKicCustomNetwork/create_custom_network (34.33s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-524506 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-524506 --network=: (32.162224228s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-524506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-524506
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-524506: (2.1474466s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.33s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (25.65s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-168062 --network=bridge
E0919 22:55:14.778870   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-168062 --network=bridge: (23.646928246s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-168062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-168062
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-168062: (1.976690965s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.65s)

                                                
                                    
TestKicExistingNetwork (24.73s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0919 22:55:17.953991   18210 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0919 22:55:17.971843   18210 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0919 22:55:17.971910   18210 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0919 22:55:17.971929   18210 cli_runner.go:164] Run: docker network inspect existing-network
W0919 22:55:17.989436   18210 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0919 22:55:17.989467   18210 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0919 22:55:17.989480   18210 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0919 22:55:17.989646   18210 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0919 22:55:18.008406   18210 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-465af21e2d8d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:b5:e0:20:10:48} reservation:<nil>}
I0919 22:55:18.008954   18210 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016eaf00}
I0919 22:55:18.008988   18210 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0919 22:55:18.009038   18210 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0919 22:55:18.067085   18210 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-105834 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-105834 --network=existing-network: (22.599756872s)
helpers_test.go:175: Cleaning up "existing-network-105834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-105834
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-105834: (1.984215641s)
I0919 22:55:42.669296   18210 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.73s)
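The flow above is: create the network with docker first, then hand its name to minikube so it is reused instead of recreated. A condensed sketch using the same subnet, options and labels the log shows:

docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
  existing-network
out/minikube-linux-amd64 start -p existing-network-105834 --network=existing-network
docker network ls --format '{{.Name}}'                        # existing-network is picked up, not recreated
out/minikube-linux-amd64 delete -p existing-network-105834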

                                                
                                    
TestKicCustomSubnet (26.04s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-178026 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-178026 --subnet=192.168.60.0/24: (23.83179004s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-178026 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-178026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-178026
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-178026: (2.181991267s)
--- PASS: TestKicCustomSubnet (26.04s)
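Requesting a specific subnet and reading it back from Docker is a two-step check; the inspect format string below is the one the test itself uses:

out/minikube-linux-amd64 start -p custom-subnet-178026 --subnet=192.168.60.0/24
docker network inspect custom-subnet-178026 --format "{{(index .IPAM.Config 0).Subnet}}"   # -> 192.168.60.0/24
out/minikube-linux-amd64 delete -p custom-subnet-178026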

                                                
                                    
TestKicStaticIP (24.96s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-312958 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-312958 --static-ip=192.168.200.200: (22.650298043s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-312958 ip
helpers_test.go:175: Cleaning up "static-ip-312958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-312958
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-312958: (2.165507693s)
--- PASS: TestKicStaticIP (24.96s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (50.31s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-407992 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-407992 --driver=docker  --container-runtime=containerd: (21.697673038s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-421913 --driver=docker  --container-runtime=containerd
E0919 22:57:11.705352   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-421913 --driver=docker  --container-runtime=containerd: (22.590440245s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-407992
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-421913
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-421913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-421913
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-421913: (2.390379442s)
helpers_test.go:175: Cleaning up "first-407992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-407992
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-407992: (2.397896044s)
--- PASS: TestMinikubeProfile (50.31s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.58s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-650032 --memory=3072 --mount-string /tmp/TestMountStartserial347269316/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0919 22:57:25.077443   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-650032 --memory=3072 --mount-string /tmp/TestMountStartserial347269316/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.577465805s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.58s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-650032 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.99s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-661644 --memory=3072 --mount-string /tmp/TestMountStartserial347269316/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-661644 --memory=3072 --mount-string /tmp/TestMountStartserial347269316/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.98753848s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.99s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-661644 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-650032 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-650032 --alsologtostderr -v=5: (1.675931877s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-661644 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-661644
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-661644: (1.294359916s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.74s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-661644
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-661644: (6.741996324s)
--- PASS: TestMountStart/serial/RestartStopped (7.74s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-661644 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (57.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-204967 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-204967 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (57.477401889s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (57.96s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (18.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204967 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204967 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-204967 -- rollout status deployment/busybox: (17.109750455s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204967 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204967 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204967 -- exec busybox-7b57f96db7-wc9t7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204967 -- exec busybox-7b57f96db7-wsgnn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204967 -- exec busybox-7b57f96db7-wc9t7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204967 -- exec busybox-7b57f96db7-wsgnn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204967 -- exec busybox-7b57f96db7-wc9t7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204967 -- exec busybox-7b57f96db7-wsgnn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (18.60s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204967 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204967 -- exec busybox-7b57f96db7-wc9t7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204967 -- exec busybox-7b57f96db7-wc9t7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204967 -- exec busybox-7b57f96db7-wsgnn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-204967 -- exec busybox-7b57f96db7-wsgnn -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
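The host-reachability check above resolves host.minikube.internal from inside a pod and pings the address it gets back (192.168.67.1 in this run). The pod name changes per rollout, so substitute whatever the pod listing reports:

out/minikube-linux-amd64 kubectl -p multinode-204967 -- exec busybox-7b57f96db7-wc9t7 -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-amd64 kubectl -p multinode-204967 -- exec busybox-7b57f96db7-wc9t7 -- \
  sh -c "ping -c 1 192.168.67.1"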

                                                
                                    
TestMultiNode/serial/AddNode (12.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-204967 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-204967 -v=5 --alsologtostderr: (11.418524558s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (12.09s)
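A sketch of the node-add step, using only the commands that appear in the run:

    # Add another worker to the running profile and confirm every node reports Running
    out/minikube-linux-amd64 node add -p multinode-204967 -v=5 --alsologtostderr
    out/minikube-linux-amd64 -p multinode-204967 status --alsologtostderr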

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-204967 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 cp testdata/cp-test.txt multinode-204967:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 cp multinode-204967:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3025127541/001/cp-test_multinode-204967.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 cp multinode-204967:/home/docker/cp-test.txt multinode-204967-m02:/home/docker/cp-test_multinode-204967_multinode-204967-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967-m02 "sudo cat /home/docker/cp-test_multinode-204967_multinode-204967-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 cp multinode-204967:/home/docker/cp-test.txt multinode-204967-m03:/home/docker/cp-test_multinode-204967_multinode-204967-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967-m03 "sudo cat /home/docker/cp-test_multinode-204967_multinode-204967-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 cp testdata/cp-test.txt multinode-204967-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 cp multinode-204967-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3025127541/001/cp-test_multinode-204967-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 cp multinode-204967-m02:/home/docker/cp-test.txt multinode-204967:/home/docker/cp-test_multinode-204967-m02_multinode-204967.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967 "sudo cat /home/docker/cp-test_multinode-204967-m02_multinode-204967.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 cp multinode-204967-m02:/home/docker/cp-test.txt multinode-204967-m03:/home/docker/cp-test_multinode-204967-m02_multinode-204967-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967-m03 "sudo cat /home/docker/cp-test_multinode-204967-m02_multinode-204967-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 cp testdata/cp-test.txt multinode-204967-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 cp multinode-204967-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3025127541/001/cp-test_multinode-204967-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 cp multinode-204967-m03:/home/docker/cp-test.txt multinode-204967:/home/docker/cp-test_multinode-204967-m03_multinode-204967.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967 "sudo cat /home/docker/cp-test_multinode-204967-m03_multinode-204967.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 cp multinode-204967-m03:/home/docker/cp-test.txt multinode-204967-m02:/home/docker/cp-test_multinode-204967-m03_multinode-204967-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967-m02 "sudo cat /home/docker/cp-test_multinode-204967-m03_multinode-204967-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.80s)
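Every hop above follows the same copy-then-verify pattern. One hop (host to primary node, then primary node to worker m02), copied from the commands in the log:

    # Host -> primary node
    out/minikube-linux-amd64 -p multinode-204967 cp testdata/cp-test.txt multinode-204967:/home/docker/cp-test.txt
    # Primary node -> worker m02
    out/minikube-linux-amd64 -p multinode-204967 cp multinode-204967:/home/docker/cp-test.txt multinode-204967-m02:/home/docker/cp-test_multinode-204967_multinode-204967-m02.txt
    # Verify the destination contents over ssh
    out/minikube-linux-amd64 -p multinode-204967 ssh -n multinode-204967-m02 "sudo cat /home/docker/cp-test_multinode-204967_multinode-204967-m02.txt"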

                                                
                                    
TestMultiNode/serial/StopNode (2.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-204967 node stop m03: (1.229310289s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-204967 status: exit status 7 (481.525472ms)

                                                
                                                
-- stdout --
	multinode-204967
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-204967-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-204967-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-204967 status --alsologtostderr: exit status 7 (485.147434ms)

                                                
                                                
-- stdout --
	multinode-204967
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-204967-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-204967-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:59:31.018418  177825 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:59:31.018526  177825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:59:31.018530  177825 out.go:374] Setting ErrFile to fd 2...
	I0919 22:59:31.018535  177825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:59:31.018731  177825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 22:59:31.018882  177825 out.go:368] Setting JSON to false
	I0919 22:59:31.018901  177825 mustload.go:65] Loading cluster: multinode-204967
	I0919 22:59:31.018944  177825 notify.go:220] Checking for updates...
	I0919 22:59:31.019246  177825 config.go:182] Loaded profile config "multinode-204967": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 22:59:31.019266  177825 status.go:174] checking status of multinode-204967 ...
	I0919 22:59:31.019702  177825 cli_runner.go:164] Run: docker container inspect multinode-204967 --format={{.State.Status}}
	I0919 22:59:31.039301  177825 status.go:371] multinode-204967 host status = "Running" (err=<nil>)
	I0919 22:59:31.039353  177825 host.go:66] Checking if "multinode-204967" exists ...
	I0919 22:59:31.039674  177825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-204967
	I0919 22:59:31.059918  177825 host.go:66] Checking if "multinode-204967" exists ...
	I0919 22:59:31.060189  177825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:59:31.060230  177825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204967
	I0919 22:59:31.078244  177825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/multinode-204967/id_rsa Username:docker}
	I0919 22:59:31.171553  177825 ssh_runner.go:195] Run: systemctl --version
	I0919 22:59:31.176095  177825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:59:31.187973  177825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:59:31.246425  177825 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-19 22:59:31.234806849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:59:31.246953  177825 kubeconfig.go:125] found "multinode-204967" server: "https://192.168.67.2:8443"
	I0919 22:59:31.246977  177825 api_server.go:166] Checking apiserver status ...
	I0919 22:59:31.247013  177825 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:59:31.259219  177825 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1464/cgroup
	W0919 22:59:31.269649  177825 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1464/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:59:31.269729  177825 ssh_runner.go:195] Run: ls
	I0919 22:59:31.273356  177825 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0919 22:59:31.277378  177825 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0919 22:59:31.277407  177825 status.go:463] multinode-204967 apiserver status = Running (err=<nil>)
	I0919 22:59:31.277416  177825 status.go:176] multinode-204967 status: &{Name:multinode-204967 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:59:31.277429  177825 status.go:174] checking status of multinode-204967-m02 ...
	I0919 22:59:31.277666  177825 cli_runner.go:164] Run: docker container inspect multinode-204967-m02 --format={{.State.Status}}
	I0919 22:59:31.296484  177825 status.go:371] multinode-204967-m02 host status = "Running" (err=<nil>)
	I0919 22:59:31.296509  177825 host.go:66] Checking if "multinode-204967-m02" exists ...
	I0919 22:59:31.296787  177825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-204967-m02
	I0919 22:59:31.314898  177825 host.go:66] Checking if "multinode-204967-m02" exists ...
	I0919 22:59:31.315188  177825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:59:31.315234  177825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-204967-m02
	I0919 22:59:31.333017  177825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32919 SSHKeyPath:/home/jenkins/minikube-integration/21594-14678/.minikube/machines/multinode-204967-m02/id_rsa Username:docker}
	I0919 22:59:31.426260  177825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:59:31.437743  177825 status.go:176] multinode-204967-m02 status: &{Name:multinode-204967-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:59:31.437774  177825 status.go:174] checking status of multinode-204967-m03 ...
	I0919 22:59:31.438020  177825 cli_runner.go:164] Run: docker container inspect multinode-204967-m03 --format={{.State.Status}}
	I0919 22:59:31.456626  177825 status.go:371] multinode-204967-m03 host status = "Stopped" (err=<nil>)
	I0919 22:59:31.456657  177825 status.go:384] host is not running, skipping remaining checks
	I0919 22:59:31.456663  177825 status.go:176] multinode-204967-m03 status: &{Name:multinode-204967-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)
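As the output shows, status deliberately exits with code 7 once any node in the profile is stopped, so the non-zero exits above are the expected result of:

    # Stop only the m03 worker, then query status (exit code 7 flags the stopped node)
    out/minikube-linux-amd64 -p multinode-204967 node stop m03
    out/minikube-linux-amd64 -p multinode-204967 status --alsologtostderr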

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-204967 node start m03 -v=5 --alsologtostderr: (6.397360961s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.10s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (70.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-204967
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-204967
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-204967: (25.01309614s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-204967 --wait=true -v=5 --alsologtostderr
E0919 23:00:28.147242   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-204967 --wait=true -v=5 --alsologtostderr: (45.633491083s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-204967
--- PASS: TestMultiNode/serial/RestartKeepsNodes (70.75s)
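Condensed, the restart check is: record the node list, stop the whole profile, start it again with --wait=true, and record the node list again (the test name implies the two lists are compared):

    out/minikube-linux-amd64 node list -p multinode-204967
    out/minikube-linux-amd64 stop -p multinode-204967
    out/minikube-linux-amd64 start -p multinode-204967 --wait=true -v=5 --alsologtostderr
    out/minikube-linux-amd64 node list -p multinode-204967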

                                                
                                    
TestMultiNode/serial/DeleteNode (5.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-204967 node delete m03: (4.612980343s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)
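The delete path mirrors the add path; the go-template query at the end prints the Ready condition of each remaining node. The commands, as logged:

    out/minikube-linux-amd64 -p multinode-204967 node delete m03
    out/minikube-linux-amd64 -p multinode-204967 status --alsologtostderr
    kubectl get nodes
    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"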

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-204967 stop: (23.91855398s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-204967 status: exit status 7 (95.118604ms)

                                                
                                                
-- stdout --
	multinode-204967
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-204967-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-204967 status --alsologtostderr: exit status 7 (86.323267ms)

                                                
                                                
-- stdout --
	multinode-204967
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-204967-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 23:01:18.589286  187399 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:01:18.589386  187399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:01:18.589392  187399 out.go:374] Setting ErrFile to fd 2...
	I0919 23:01:18.589396  187399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:01:18.589583  187399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 23:01:18.589768  187399 out.go:368] Setting JSON to false
	I0919 23:01:18.589787  187399 mustload.go:65] Loading cluster: multinode-204967
	I0919 23:01:18.589911  187399 notify.go:220] Checking for updates...
	I0919 23:01:18.590171  187399 config.go:182] Loaded profile config "multinode-204967": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:01:18.590193  187399 status.go:174] checking status of multinode-204967 ...
	I0919 23:01:18.590657  187399 cli_runner.go:164] Run: docker container inspect multinode-204967 --format={{.State.Status}}
	I0919 23:01:18.611074  187399 status.go:371] multinode-204967 host status = "Stopped" (err=<nil>)
	I0919 23:01:18.611124  187399 status.go:384] host is not running, skipping remaining checks
	I0919 23:01:18.611139  187399 status.go:176] multinode-204967 status: &{Name:multinode-204967 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 23:01:18.611214  187399 status.go:174] checking status of multinode-204967-m02 ...
	I0919 23:01:18.611484  187399 cli_runner.go:164] Run: docker container inspect multinode-204967-m02 --format={{.State.Status}}
	I0919 23:01:18.629663  187399 status.go:371] multinode-204967-m02 host status = "Stopped" (err=<nil>)
	I0919 23:01:18.629684  187399 status.go:384] host is not running, skipping remaining checks
	I0919 23:01:18.629690  187399 status.go:176] multinode-204967-m02 status: &{Name:multinode-204967-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (45.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-204967 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-204967 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (45.035318516s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-204967 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (45.67s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-204967
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-204967-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-204967-m02 --driver=docker  --container-runtime=containerd: exit status 14 (77.958245ms)

                                                
                                                
-- stdout --
	* [multinode-204967-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-204967-m02' is duplicated with machine name 'multinode-204967-m02' in profile 'multinode-204967'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-204967-m03 --driver=docker  --container-runtime=containerd
E0919 23:02:11.705436   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:02:25.077603   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-204967-m03 --driver=docker  --container-runtime=containerd: (20.769811785s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-204967
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-204967: exit status 80 (305.922091ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-204967 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-204967-m03 already exists in multinode-204967-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-204967-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-204967-m03: (2.436823965s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.65s)
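Reproducing the two rejections by hand: the first start fails with exit status 14 because multinode-204967-m02 is already a machine name inside the multinode-204967 profile, and the later node add fails with exit status 80 because the next node name (m03) now belongs to the freshly created multinode-204967-m03 profile:

    # Rejected (exit 14): profile name collides with a machine name in another profile
    out/minikube-linux-amd64 start -p multinode-204967-m02 --driver=docker --container-runtime=containerd
    # Allowed: -m03 is still free as a profile name
    out/minikube-linux-amd64 start -p multinode-204967-m03 --driver=docker --container-runtime=containerd
    # Rejected (exit 80): adding a node named m03 now conflicts with that profile
    out/minikube-linux-amd64 node add -p multinode-204967
    # Clean up the throwaway profile
    out/minikube-linux-amd64 delete -p multinode-204967-m03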

                                                
                                    
TestPreload (133.42s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-078616 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-078616 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m6.655210335s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-078616 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-078616 image pull gcr.io/k8s-minikube/busybox: (2.52831238s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-078616
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-078616: (6.621726425s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-078616 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-078616 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (54.844798459s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-078616 image list
helpers_test.go:175: Cleaning up "test-preload-078616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-078616
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-078616: (2.521738223s)
--- PASS: TestPreload (133.42s)
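The preload scenario: create a cluster with the preloaded image tarball disabled, pull an extra image into it, stop, restart with defaults, and list images (presumably to confirm the manually pulled image survived the restart). The same sequence, trimmed of the verbose-logging flags used in the run:

    out/minikube-linux-amd64 start -p test-preload-078616 --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.32.0
    out/minikube-linux-amd64 -p test-preload-078616 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-078616
    out/minikube-linux-amd64 start -p test-preload-078616 --memory=3072 --wait=true --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 -p test-preload-078616 image list
    out/minikube-linux-amd64 delete -p test-preload-078616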

                                                
                                    
TestScheduledStopUnix (98.22s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-597610 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-597610 --memory=3072 --driver=docker  --container-runtime=containerd: (22.166298038s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-597610 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-597610 -n scheduled-stop-597610
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-597610 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0919 23:05:08.323983   18210 retry.go:31] will retry after 131.032µs: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
I0919 23:05:08.325234   18210 retry.go:31] will retry after 193.53µs: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
I0919 23:05:08.326412   18210 retry.go:31] will retry after 268.922µs: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
I0919 23:05:08.327587   18210 retry.go:31] will retry after 207.972µs: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
I0919 23:05:08.328740   18210 retry.go:31] will retry after 469.938µs: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
I0919 23:05:08.329930   18210 retry.go:31] will retry after 683.3µs: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
I0919 23:05:08.331105   18210 retry.go:31] will retry after 710.751µs: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
I0919 23:05:08.332263   18210 retry.go:31] will retry after 2.031902ms: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
I0919 23:05:08.334591   18210 retry.go:31] will retry after 2.486461ms: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
I0919 23:05:08.337927   18210 retry.go:31] will retry after 4.233542ms: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
I0919 23:05:08.343264   18210 retry.go:31] will retry after 3.360699ms: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
I0919 23:05:08.347589   18210 retry.go:31] will retry after 8.28746ms: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
I0919 23:05:08.356959   18210 retry.go:31] will retry after 12.337397ms: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
I0919 23:05:08.370263   18210 retry.go:31] will retry after 10.653838ms: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
I0919 23:05:08.381585   18210 retry.go:31] will retry after 25.267766ms: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
I0919 23:05:08.407954   18210 retry.go:31] will retry after 64.315462ms: open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/scheduled-stop-597610/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-597610 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-597610 -n scheduled-stop-597610
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-597610
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-597610 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-597610
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-597610: exit status 7 (72.011021ms)

                                                
                                                
-- stdout --
	scheduled-stop-597610
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-597610 -n scheduled-stop-597610
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-597610 -n scheduled-stop-597610: exit status 7 (73.628779ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-597610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-597610
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-597610: (4.536358185s)
--- PASS: TestScheduledStopUnix (98.22s)
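The scheduled-stop flow above: schedule a stop well in the future, overwrite it with a shorter schedule, cancel it, then schedule once more and let it fire; once the host has actually stopped, status exits with code 7. The commands, as logged:

    # Schedule a stop 5 minutes out, then replace it with a 15 second schedule
    out/minikube-linux-amd64 stop -p scheduled-stop-597610 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-597610 --schedule 15s
    # Cancel whatever is currently scheduled
    out/minikube-linux-amd64 stop -p scheduled-stop-597610 --cancel-scheduled
    # Schedule again and let it fire; afterwards the host reports Stopped (exit 7)
    out/minikube-linux-amd64 stop -p scheduled-stop-597610 --schedule 15s
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-597610 -n scheduled-stop-597610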

                                                
                                    
TestInsufficientStorage (9.48s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-443125 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-443125 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (6.952198022s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"648262da-8388-47ff-a7c8-08c2d7c8ab9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-443125] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b8065f7-2001-42c5-9f50-4da67881f861","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21594"}}
	{"specversion":"1.0","id":"dc74476f-5afe-4c1b-a739-69b6d8854741","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"69eb2106-5405-4683-bff4-70684695aa5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig"}}
	{"specversion":"1.0","id":"a9015372-4469-4a06-9e0f-40bd8ba18ec0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube"}}
	{"specversion":"1.0","id":"14cf3e74-5fd6-402a-9cad-bb753d414f14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a31f6a01-ef15-4753-9e95-d5d87ebfc608","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6a2710b4-58e8-4175-8236-ff4b9fe4bd99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"372fed70-787c-4584-84cf-08ca72264f3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d147eb47-4fc7-42c2-af9e-c4c5d6d9bb10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d85d7039-5841-4e5a-afcf-b7ea1b78b743","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2eccb54d-c72b-4670-bdf6-7950ee9a5c50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-443125\" primary control-plane node in \"insufficient-storage-443125\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e21c532-efa5-4d51-93af-d01f3801cdb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c6f62e3-749d-458c-91cc-7bea81b71b3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f228e417-bbdd-40fa-a048-a29a01c1c871","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-443125 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-443125 --output=json --layout=cluster: exit status 7 (290.466313ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-443125","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-443125","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 23:06:31.132454  209456 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-443125" does not appear in /home/jenkins/minikube-integration/21594-14678/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-443125 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-443125 --output=json --layout=cluster: exit status 7 (290.637563ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-443125","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-443125","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 23:06:31.422460  209561 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-443125" does not appear in /home/jenkins/minikube-integration/21594-14678/kubeconfig
	E0919 23:06:31.434369  209561 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/insufficient-storage-443125/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-443125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-443125
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-443125: (1.946210301s)
--- PASS: TestInsufficientStorage (9.48s)
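The JSON events above show two test-only knobs (MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19) that make minikube treat /var as full, so start aborts with exit code 26 (RSRC_DOCKER_STORAGE) and status reports StatusCode 507. A sketch, assuming the variables are picked up the same way when exported from the shell:

    # Simulate a nearly-full disk with the values seen in this run, then try to start
    export MINIKUBE_TEST_STORAGE_CAPACITY=100
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19
    out/minikube-linux-amd64 start -p insufficient-storage-443125 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=containerd
    # Exit code 26; cluster status then reports InsufficientStorage (507)
    out/minikube-linux-amd64 status -p insufficient-storage-443125 --output=json --layout=cluster
    out/minikube-linux-amd64 delete -p insufficient-storage-443125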

                                                
                                    
TestRunningBinaryUpgrade (44.45s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1193365825 start -p running-upgrade-979158 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1193365825 start -p running-upgrade-979158 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (18.422489063s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-979158 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-979158 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (21.374782402s)
helpers_test.go:175: Cleaning up "running-upgrade-979158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-979158
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-979158: (2.031438821s)
--- PASS: TestRunningBinaryUpgrade (44.45s)
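The running-binary upgrade is simply two starts against the same profile: one with an older release (the test downloads v1.32.0 to a temporary path; /tmp/minikube-v1.32.0.<random> stands in for it here), then one with the freshly built binary while the cluster is still running:

    /tmp/minikube-v1.32.0.<random> start -p running-upgrade-979158 --memory=3072 --vm-driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 start -p running-upgrade-979158 --memory=3072 --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 delete -p running-upgrade-979158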

                                                
                                    
TestKubernetesUpgrade (340.48s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (24.191778025s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-430859
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-430859: (14.381514495s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-430859 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-430859 status --format={{.Host}}: exit status 7 (80.335713ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m41.216593958s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-430859 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (110.406675ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-430859] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-430859
	    minikube start -p kubernetes-upgrade-430859 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4308592 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-430859 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.908536787s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-430859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-430859
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-430859: (2.499178905s)
--- PASS: TestKubernetesUpgrade (340.48s)
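Condensed, the upgrade test walks one Kubernetes version forward, verifies that a downgrade is refused, then restarts on the new version:

    # Bring up v1.28.0, stop, then upgrade the stopped cluster to v1.34.0
    out/minikube-linux-amd64 start -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-430859
    out/minikube-linux-amd64 start -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.34.0 --driver=docker --container-runtime=containerd
    # A downgrade attempt is rejected with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED)
    out/minikube-linux-amd64 start -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
    # Restarting on the current version still works
    out/minikube-linux-amd64 start -p kubernetes-upgrade-430859 --memory=3072 --kubernetes-version=v1.34.0 --driver=docker --container-runtime=containerd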

                                                
                                    
TestMissingContainerUpgrade (139.08s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
E0919 23:07:11.700600   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2439711122 start -p missing-upgrade-065379 --memory=3072 --driver=docker  --container-runtime=containerd
E0919 23:07:25.077314   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2439711122 start -p missing-upgrade-065379 --memory=3072 --driver=docker  --container-runtime=containerd: (46.033612933s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-065379
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-065379: (3.183825654s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-065379
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-065379 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-065379 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m24.597045686s)
helpers_test.go:175: Cleaning up "missing-upgrade-065379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-065379
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-065379: (2.485614338s)
--- PASS: TestMissingContainerUpgrade (139.08s)
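Here the older release creates the cluster, the docker container backing it is removed behind minikube's back, and the new binary has to recreate it. The commands from the run, with /tmp/minikube-v1.32.0.<random> standing in for the downloaded old binary:

    /tmp/minikube-v1.32.0.<random> start -p missing-upgrade-065379 --memory=3072 --driver=docker --container-runtime=containerd
    # Remove the container out from under minikube
    docker stop missing-upgrade-065379
    docker rm missing-upgrade-065379
    # The new binary must detect the missing machine and recreate it
    out/minikube-linux-amd64 start -p missing-upgrade-065379 --memory=3072 --driver=docker --container-runtime=containerd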

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-097820 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-097820 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (105.578067ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-097820] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
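Exit status 14 (MK_USAGE) is the expected guard here: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the two valid alternatives, with hypothetical profile names:

$ out/minikube-linux-amd64 config unset kubernetes-version    # clear any global default, as the error message suggests
$ out/minikube-linux-amd64 start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=containerd
$ out/minikube-linux-amd64 start -p k8s-demo --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd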

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (33.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-097820 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-097820 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.881238899s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-097820 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.31s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (28.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-097820 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-097820 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (25.538781273s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-097820 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-097820 status -o json: exit status 2 (347.721428ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-097820","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-097820
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-097820: (2.584257645s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.47s)
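Note that minikube status exits non-zero (2 here) when a component is stopped, so the test reads the JSON on stdout rather than the exit code. A minimal sketch of the same check, assuming jq is available on the host:

$ out/minikube-linux-amd64 -p NoKubernetes-097820 status -o json | jq -r '.Kubelet'
Stopped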

                                                
                                    
TestNoKubernetes/serial/Start (6.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-097820 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-097820 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (6.591803357s)
--- PASS: TestNoKubernetes/serial/Start (6.59s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-097820 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-097820 "sudo systemctl is-active --quiet service kubelet": exit status 1 (270.935328ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
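The ssh exit status 3 is systemd's way of reporting an inactive unit, which is exactly what this test wants on a no-Kubernetes profile. Dropping --quiet prints the state instead of only encoding it in the exit code; a minimal sketch:

$ out/minikube-linux-amd64 ssh -p NoKubernetes-097820 'sudo systemctl is-active kubelet'
inactive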

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.85s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-097820
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-097820: (1.213914679s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-097820 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-097820 --driver=docker  --container-runtime=containerd: (7.910413978s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.91s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-097820 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-097820 "sudo systemctl is-active --quiet service kubelet": exit status 1 (382.824677ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.00s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (59.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.966453558 start -p stopped-upgrade-607986 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.966453558 start -p stopped-upgrade-607986 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (20.114127436s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.966453558 -p stopped-upgrade-607986 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.966453558 -p stopped-upgrade-607986 stop: (11.748380017s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-607986 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-607986 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (27.663564173s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (59.53s)
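This variant upgrades a cleanly stopped cluster rather than a deleted container: the old binary starts and stops the profile, then the binary under test restarts it. A minimal sketch, assuming an older release binary at /tmp/minikube-old and a hypothetical profile name:

$ /tmp/minikube-old start -p stopped-upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=containerd
$ /tmp/minikube-old -p stopped-upgrade-demo stop
$ out/minikube-linux-amd64 start -p stopped-upgrade-demo --memory=3072 --driver=docker --container-runtime=containerd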

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-607986
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-607986: (1.259641253s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

                                                
                                    
TestPause/serial/Start (45.18s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-565476 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-565476 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (45.180388688s)
--- PASS: TestPause/serial/Start (45.18s)

                                                
                                    
TestNetworkPlugins/group/false (3.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-896447 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-896447 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (166.544384ms)

                                                
                                                
-- stdout --
	* [false-896447] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 23:09:49.945942  252734 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:09:49.946223  252734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:09:49.946234  252734 out.go:374] Setting ErrFile to fd 2...
	I0919 23:09:49.946238  252734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:09:49.946533  252734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14678/.minikube/bin
	I0919 23:09:49.947113  252734 out.go:368] Setting JSON to false
	I0919 23:09:49.948455  252734 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6734,"bootTime":1758316656,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:09:49.948548  252734 start.go:140] virtualization: kvm guest
	I0919 23:09:49.951211  252734 out.go:179] * [false-896447] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:09:49.952997  252734 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:09:49.952986  252734 notify.go:220] Checking for updates...
	I0919 23:09:49.956149  252734 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:09:49.958645  252734 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14678/kubeconfig
	I0919 23:09:49.960712  252734 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14678/.minikube
	I0919 23:09:49.962410  252734 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:09:49.963920  252734 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:09:49.966285  252734 config.go:182] Loaded profile config "cert-expiration-175441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:09:49.966423  252734 config.go:182] Loaded profile config "kubernetes-upgrade-430859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:09:49.966553  252734 config.go:182] Loaded profile config "pause-565476": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0919 23:09:49.966681  252734 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:09:49.993448  252734 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:09:49.993533  252734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:09:50.053306  252734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 23:09:50.041371891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:09:50.053420  252734 docker.go:318] overlay module found
	I0919 23:09:50.055460  252734 out.go:179] * Using the docker driver based on user configuration
	I0919 23:09:50.056969  252734 start.go:304] selected driver: docker
	I0919 23:09:50.056992  252734 start.go:918] validating driver "docker" against <nil>
	I0919 23:09:50.057004  252734 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:09:50.058947  252734 out.go:203] 
	W0919 23:09:50.060342  252734 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0919 23:09:50.061942  252734 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-896447 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-896447

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-896447

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-896447

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-896447

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-896447

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-896447

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-896447

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-896447

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-896447

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-896447

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-896447

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-896447" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-896447" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:07:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-175441
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:08:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-430859
contexts:
- context:
    cluster: cert-expiration-175441
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:07:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-175441
  name: cert-expiration-175441
- context:
    cluster: kubernetes-upgrade-430859
    user: kubernetes-upgrade-430859
  name: kubernetes-upgrade-430859
current-context: ""
kind: Config
users:
- name: cert-expiration-175441
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/cert-expiration-175441/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/cert-expiration-175441/client.key
- name: kubernetes-upgrade-430859
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kubernetes-upgrade-430859/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kubernetes-upgrade-430859/client.key
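The kubeconfig above only registers the cert-expiration-175441 and kubernetes-upgrade-430859 profiles and has an empty current-context, which is why every false-896447 lookup in these debug logs fails with "context was not found": the false-896447 cluster was, as intended, never created. The registered contexts can be listed and selected with standard kubectl commands, for example:

$ kubectl config get-contexts
$ kubectl config use-context cert-expiration-175441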

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-896447

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-896447"

                                                
                                                
----------------------- debugLogs end: false-896447 [took: 3.307519695s] --------------------------------
helpers_test.go:175: Cleaning up "false-896447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-896447
--- PASS: TestNetworkPlugins/group/false (3.66s)
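The pass here means the guard fired as intended: with the containerd runtime, minikube refuses --cni=false because containerd needs a CNI plugin for pod networking. A minimal sketch of a start that satisfies the requirement, with a hypothetical profile name (as I understand the flag, --cni also accepts values such as auto, calico, cilium, flannel, kindnet, or a path to a CNI manifest):

$ out/minikube-linux-amd64 start -p cni-demo --memory=3072 --cni=bridge --driver=docker --container-runtime=containerd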

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.85s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-565476 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-565476 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.836459406s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.85s)

                                                
                                    
TestPause/serial/Pause (0.87s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-565476 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.87s)

                                                
                                    
TestPause/serial/VerifyStatus (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-565476 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-565476 --output=json --layout=cluster: exit status 2 (336.664238ms)

                                                
                                                
-- stdout --
	{"Name":"pause-565476","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-565476","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
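In this cluster-layout output the status codes read like HTTP: 200 for OK, 405 for Stopped, 418 for Paused, so a paused apiserver plus a stopped kubelet is the expected shape after pause, and the overall exit status 2 simply mirrors that. A minimal sketch of pulling the per-component states, assuming jq is available:

$ out/minikube-linux-amd64 status -p pause-565476 --output=json --layout=cluster | jq '.Nodes[].Components'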

                                                
                                    
TestPause/serial/Unpause (0.81s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-565476 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.81s)

                                                
                                    
TestPause/serial/PauseAgain (0.94s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-565476 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

                                                
                                    
TestPause/serial/DeletePaused (2.96s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-565476 --alsologtostderr -v=5
I0919 23:10:26.579149   18210 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0919 23:10:26.579364   18210 install.go:123] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3623341017/001:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0919 23:10:26.615230   18210 install.go:134] /tmp/TestKVMDriverInstallOrUpdate3623341017/001/docker-machine-driver-kvm2 version is {Version:v1.1.1 Commit:40a1a986a50eac533e396012e35516d3d6c63f36-dirty}
W0919 23:10:26.615269   18210 install.go:61] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0 or later
W0919 23:10:26.615381   18210 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0919 23:10:26.615438   18210 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3623341017/001/docker-machine-driver-kvm2
I0919 23:10:27.895551   18210 install.go:123] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3623341017/001:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0919 23:10:27.915482   18210 install.go:134] /tmp/TestKVMDriverInstallOrUpdate3623341017/001/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:1af8bdc072232de4b1fec3b6cc0e8337e118bc83}
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-565476 --alsologtostderr -v=5: (2.96182559s)
--- PASS: TestPause/serial/DeletePaused (2.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (57.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-757990 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-757990 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (57.434898192s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (57.44s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.64s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-565476
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-565476: exit status 1 (21.729687ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-565476: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.64s)
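The same cleanup check can be done with name-filtered docker queries; once the profile is deleted each command should print nothing beyond its header row. A minimal sketch:

$ docker ps -a --filter name=pause-565476
$ docker volume ls --filter name=pause-565476
$ docker network ls --filter name=pause-565476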

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (77.7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-364197 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-364197 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (1m17.699773748s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (77.70s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (99.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-403962 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-403962 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (1m39.334483035s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (99.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-757990 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a5f16e84-4ad5-4669-92c0-b409236ac87c] Pending
helpers_test.go:352: "busybox" [a5f16e84-4ad5-4669-92c0-b409236ac87c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a5f16e84-4ad5-4669-92c0-b409236ac87c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003878225s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-757990 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.30s)
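The readiness wait the harness performs can be reproduced with kubectl against the label set by testdata/busybox.yaml, using the same 8-minute budget; a minimal sketch:

$ kubectl --context old-k8s-version-757990 wait --for=condition=Ready pod -l integration-test=busybox --timeout=480s
$ kubectl --context old-k8s-version-757990 exec busybox -- /bin/sh -c "ulimit -n"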

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-757990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-757990 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-757990 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-757990 --alsologtostderr -v=3: (12.118747171s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-757990 -n old-k8s-version-757990
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-757990 -n old-k8s-version-757990: exit status 7 (71.667177ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-757990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
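Exit status 7 from the status probe is expected on a stopped profile, which is why the test logs it as "may be ok" and proceeds to enable the addon anyway. A minimal sketch that tolerates the non-zero exit while still reading the host state:

$ out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-757990 || true
Stopped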

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (43.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-757990 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-757990 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (43.261460199s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-757990 -n old-k8s-version-757990
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.66s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-364197 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [77ffa090-4e73-444b-a6fc-8ef6814b0618] Pending
helpers_test.go:352: "busybox" [77ffa090-4e73-444b-a6fc-8ef6814b0618] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [77ffa090-4e73-444b-a6fc-8ef6814b0618] Running
E0919 23:11:54.780707   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005041451s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-364197 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-364197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-364197 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-364197 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-364197 --alsologtostderr -v=3: (12.075814375s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-364197 -n no-preload-364197
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-364197 -n no-preload-364197: exit status 7 (79.953451ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-364197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (86.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-364197 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0919 23:12:11.701859   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:12:25.077398   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-364197 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (1m26.31786198s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-364197 -n no-preload-364197
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (86.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-403962 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8412a9d8-1dcb-4b3f-9d69-08a96941cf75] Pending
helpers_test.go:352: "busybox" [8412a9d8-1dcb-4b3f-9d69-08a96941cf75] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8412a9d8-1dcb-4b3f-9d69-08a96941cf75] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004186556s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-403962 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-ncznp" [9d0b47dc-ccdd-4ab2-a31f-bc9e026fb1ad] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004762603s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-403962 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-403962 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-ncznp" [9d0b47dc-ccdd-4ab2-a31f-bc9e026fb1ad] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004165647s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-757990 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-403962 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-403962 --alsologtostderr -v=3: (12.134430334s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-757990 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-757990 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-757990 -n old-k8s-version-757990
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-757990 -n old-k8s-version-757990: exit status 2 (325.823284ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-757990 -n old-k8s-version-757990
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-757990 -n old-k8s-version-757990: exit status 2 (334.447418ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-757990 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-757990 -n old-k8s-version-757990
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-757990 -n old-k8s-version-757990
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (133.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-149888 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-149888 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (2m13.165120871s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (133.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-403962 -n embed-certs-403962
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-403962 -n embed-certs-403962: exit status 7 (79.202496ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-403962 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (52.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-403962 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-403962 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (51.676868933s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-403962 -n embed-certs-403962
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (36.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-312465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-312465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (36.330519176s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rj7g8" [6996c6a8-a114-41bb-a444-9458acee68d4] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003716241s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9hzq9" [460edd2f-380a-43fe-8d40-efd712ff663e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002916855s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rj7g8" [6996c6a8-a114-41bb-a444-9458acee68d4] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rj7g8" [6996c6a8-a114-41bb-a444-9458acee68d4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003527529s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-364197 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9hzq9" [460edd2f-380a-43fe-8d40-efd712ff663e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004498008s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-403962 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-364197 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-403962 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (47.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (47.309754737s)
--- PASS: TestNetworkPlugins/group/auto/Start (47.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-312465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-312465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.37863744s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-312465 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-312465 --alsologtostderr -v=3: (2.352597312s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-312465 -n newest-cni-312465
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-312465 -n newest-cni-312465: exit status 7 (86.573102ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-312465 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (12.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-312465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-312465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (12.069778765s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-312465 -n newest-cni-312465
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (48.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (48.636080001s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (48.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-312465 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (165.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (2m45.649949453s)
--- PASS: TestNetworkPlugins/group/calico/Start (165.65s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-896447 "pgrep -a kubelet"
I0919 23:14:50.383711   18210 config.go:182] Loaded profile config "auto-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-896447 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6ln77" [039e252e-041e-47e8-bd05-4f9ac45d92cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6ln77" [039e252e-041e-47e8-bd05-4f9ac45d92cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003658886s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-g97kp" [1d8ebe06-9436-4fe3-b7d8-f9cd0281feb3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003971798s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-896447 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-896447 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-896447 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-149888 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5b9ca5d0-9364-41e1-9258-0d9ac0d75b1c] Pending
helpers_test.go:352: "busybox" [5b9ca5d0-9364-41e1-9258-0d9ac0d75b1c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
I0919 23:15:02.424372   18210 config.go:182] Loaded profile config "kindnet-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
helpers_test.go:352: "busybox" [5b9ca5d0-9364-41e1-9258-0d9ac0d75b1c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003845438s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-149888 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-896447 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-896447 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zwxtm" [914dd5ca-c640-4009-a7d7-a8c1375c0878] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zwxtm" [914dd5ca-c640-4009-a7d7-a8c1375c0878] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004528198s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-149888 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-149888 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-896447 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-896447 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-896447 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-149888 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-149888 --alsologtostderr -v=3: (13.015218522s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.02s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (623.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (10m23.297252454s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (623.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888: exit status 7 (88.651714ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-149888 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-149888 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-149888 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (51.998232898s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-149888 -n default-k8s-diff-port-149888
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (104.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m44.41456208s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (104.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tjkd6" [ace75e48-d784-47f9-9a45-4f907bf540b5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005804617s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tjkd6" [ace75e48-d784-47f9-9a45-4f907bf540b5] Running
E0919 23:16:25.509392   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:25.515828   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:25.527331   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:25.548828   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:25.590308   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:25.671876   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:25.833511   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:26.155946   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:26.797727   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:28.079304   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004125399s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-149888 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-149888 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (131.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0919 23:16:46.004463   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:48.128316   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/no-preload-364197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:48.134865   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/no-preload-364197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:48.146531   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/no-preload-364197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:48.168017   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/no-preload-364197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:48.209519   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/no-preload-364197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:48.291026   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/no-preload-364197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:48.452719   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/no-preload-364197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:48.774494   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/no-preload-364197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:49.416741   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/no-preload-364197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:50.698235   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/no-preload-364197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:53.259881   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/no-preload-364197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:16:58.382758   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/no-preload-364197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:17:06.486206   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:17:08.149541   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:17:08.624969   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/no-preload-364197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:17:11.700468   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/addons-019551/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (2m11.181957674s)
--- PASS: TestNetworkPlugins/group/flannel/Start (131.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-896447 "pgrep -a kubelet"
I0919 23:17:17.589594   18210 config.go:182] Loaded profile config "enable-default-cni-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-896447 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8d2k7" [a9d9692a-2a94-4236-a923-9ce1e1a5db72] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8d2k7" [a9d9692a-2a94-4236-a923-9ce1e1a5db72] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004811881s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-r2l7z" [ebd611c4-409f-4cc4-8508-4c5892b758d8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003411301s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-896447 "pgrep -a kubelet"
I0919 23:17:24.281298   18210 config.go:182] Loaded profile config "calico-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-896447 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x9d4s" [9a47d747-9f7e-476c-bf25-880654fc0d43] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 23:17:25.077467   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/functional-541880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-x9d4s" [9a47d747-9f7e-476c-bf25-880654fc0d43] Running
E0919 23:17:29.106324   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/no-preload-364197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003904616s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-896447 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-896447 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-896447 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-896447 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-896447 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-896447 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (63.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0919 23:17:47.448452   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-896447 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m3.362530305s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-896447 "pgrep -a kubelet"
I0919 23:18:51.076885   18210 config.go:182] Loaded profile config "bridge-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-896447 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kgwf4" [36f99e9d-442c-41ad-9929-ad43a9e61634] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kgwf4" [36f99e9d-442c-41ad-9929-ad43a9e61634] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005623706s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-72plf" [cf500be4-8d81-443f-b0ae-4337d7d35c62] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003784157s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-896447 "pgrep -a kubelet"
I0919 23:19:00.131323   18210 config.go:182] Loaded profile config "flannel-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-896447 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5zmpx" [c829afe3-d1fe-4480-be6b-997aafd8408f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5zmpx" [c829afe3-d1fe-4480-be6b-997aafd8408f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003694751s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-896447 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-896447 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-896447 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-896447 exec deployment/netcat -- nslookup kubernetes.default
E0919 23:19:09.370401   18210 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/old-k8s-version-757990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-896447 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-896447 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-896447 "pgrep -a kubelet"
I0919 23:25:43.219572   18210 config.go:182] Loaded profile config "custom-flannel-896447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-896447 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5747n" [4ad2276d-8677-451f-ac63-a308989cf7ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5747n" [4ad2276d-8677-451f-ac63-a308989cf7ef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004371116s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-896447 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-896447 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-896447 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    

Test skip (25/329)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-606373" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-606373
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-896447 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-896447

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-896447

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-896447

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-896447

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-896447

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-896447

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-896447

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-896447

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-896447

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-896447

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-896447

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-896447" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-896447" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:07:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-175441
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:08:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-430859
contexts:
- context:
    cluster: cert-expiration-175441
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:07:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-175441
  name: cert-expiration-175441
- context:
    cluster: kubernetes-upgrade-430859
    user: kubernetes-upgrade-430859
  name: kubernetes-upgrade-430859
current-context: ""
kind: Config
users:
- name: cert-expiration-175441
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/cert-expiration-175441/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/cert-expiration-175441/client.key
- name: kubernetes-upgrade-430859
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kubernetes-upgrade-430859/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kubernetes-upgrade-430859/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-896447

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-896447"

                                                
                                                
----------------------- debugLogs end: kubenet-896447 [took: 3.652221155s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-896447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-896447
--- SKIP: TestNetworkPlugins/group/kubenet (3.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-896447 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-896447" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-896447" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-896447" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:07:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-175441
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14678/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:08:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-430859
contexts:
- context:
    cluster: cert-expiration-175441
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:07:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-175441
  name: cert-expiration-175441
- context:
    cluster: kubernetes-upgrade-430859
    user: kubernetes-upgrade-430859
  name: kubernetes-upgrade-430859
current-context: ""
kind: Config
users:
- name: cert-expiration-175441
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/cert-expiration-175441/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/cert-expiration-175441/client.key
- name: kubernetes-upgrade-430859
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kubernetes-upgrade-430859/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14678/.minikube/profiles/kubernetes-upgrade-430859/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-896447

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-896447" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896447"

                                                
                                                
----------------------- debugLogs end: cilium-896447 [took: 3.90989737s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-896447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-896447
--- SKIP: TestNetworkPlugins/group/cilium (4.16s)

                                                
                                    